00:00:00.000 Started by upstream project "autotest-per-patch" build number 127206 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.126 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:06.584 The recommended git tool is: git 00:00:06.585 using credential 00000000-0000-0000-0000-000000000002 00:00:06.586 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:06.600 Fetching changes from the remote Git repository 00:00:06.601 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:06.614 Using shallow fetch with depth 1 00:00:06.614 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:06.614 > git --version # timeout=10 00:00:06.627 > git --version # 'git version 2.39.2' 00:00:06.627 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:06.640 Setting http proxy: proxy-dmz.intel.com:911 00:00:06.640 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:11.406 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:11.418 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:11.431 Checking out Revision 4313f32deecbb7108199ebd1913b403a3005dece (FETCH_HEAD) 00:00:11.431 > git config core.sparsecheckout # timeout=10 00:00:11.445 > git read-tree -mu HEAD # timeout=10 00:00:11.463 > git checkout -f 4313f32deecbb7108199ebd1913b403a3005dece # timeout=5 00:00:11.484 Commit message: "packer: Add bios builder" 00:00:11.484 > git rev-list --no-walk 4313f32deecbb7108199ebd1913b403a3005dece # timeout=10 00:00:11.573 [Pipeline] Start of Pipeline 00:00:11.586 [Pipeline] library 00:00:11.587 Loading library shm_lib@master 00:00:11.587 Library shm_lib@master is cached. Copying from home. 00:00:11.602 [Pipeline] node 00:00:11.612 Running on GP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:11.614 [Pipeline] { 00:00:11.621 [Pipeline] catchError 00:00:11.623 [Pipeline] { 00:00:11.639 [Pipeline] wrap 00:00:11.650 [Pipeline] { 00:00:11.658 [Pipeline] stage 00:00:11.659 [Pipeline] { (Prologue) 00:00:11.840 [Pipeline] sh 00:00:12.126 + logger -p user.info -t JENKINS-CI 00:00:12.145 [Pipeline] echo 00:00:12.146 Node: GP8 00:00:12.156 [Pipeline] sh 00:00:12.451 [Pipeline] setCustomBuildProperty 00:00:12.460 [Pipeline] echo 00:00:12.461 Cleanup processes 00:00:12.465 [Pipeline] sh 00:00:12.743 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:12.743 1898199 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:12.757 [Pipeline] sh 00:00:13.040 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:13.040 ++ awk '{print $1}' 00:00:13.040 ++ grep -v 'sudo pgrep' 00:00:13.040 + sudo kill -9 00:00:13.040 + true 00:00:13.056 [Pipeline] cleanWs 00:00:13.066 [WS-CLEANUP] Deleting project workspace... 00:00:13.066 [WS-CLEANUP] Deferred wipeout is used... 
00:00:13.073 [WS-CLEANUP] done 00:00:13.079 [Pipeline] setCustomBuildProperty 00:00:13.097 [Pipeline] sh 00:00:13.379 + sudo git config --global --replace-all safe.directory '*' 00:00:13.464 [Pipeline] httpRequest 00:00:13.487 [Pipeline] echo 00:00:13.488 Sorcerer 10.211.164.101 is alive 00:00:13.496 [Pipeline] httpRequest 00:00:13.501 HttpMethod: GET 00:00:13.501 URL: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:13.502 Sending request to url: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:13.521 Response Code: HTTP/1.1 200 OK 00:00:13.522 Success: Status code 200 is in the accepted range: 200,404 00:00:13.522 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:38.892 [Pipeline] sh 00:00:39.175 + tar --no-same-owner -xf jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:39.452 [Pipeline] httpRequest 00:00:39.496 [Pipeline] echo 00:00:39.498 Sorcerer 10.211.164.101 is alive 00:00:39.509 [Pipeline] httpRequest 00:00:39.514 HttpMethod: GET 00:00:39.515 URL: http://10.211.164.101/packages/spdk_064b11df72dd46f002dc693baedf087482a8f735.tar.gz 00:00:39.515 Sending request to url: http://10.211.164.101/packages/spdk_064b11df72dd46f002dc693baedf087482a8f735.tar.gz 00:00:39.528 Response Code: HTTP/1.1 200 OK 00:00:39.529 Success: Status code 200 is in the accepted range: 200,404 00:00:39.529 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_064b11df72dd46f002dc693baedf087482a8f735.tar.gz 00:01:58.773 [Pipeline] sh 00:01:59.058 + tar --no-same-owner -xf spdk_064b11df72dd46f002dc693baedf087482a8f735.tar.gz 00:02:04.352 [Pipeline] sh 00:02:04.632 + git -C spdk log --oneline -n5 00:02:04.632 064b11df7 general: fix misspells and typos 00:02:04.632 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata. 
00:02:04.632 fc2398dfa raid: clear base bdev configure_cb after executing 00:02:04.632 5558f3f50 raid: complete bdev_raid_create after sb is written 00:02:04.632 d005e023b raid: fix empty slot not updated in sb after resize 00:02:04.645 [Pipeline] } 00:02:04.664 [Pipeline] // stage 00:02:04.674 [Pipeline] stage 00:02:04.677 [Pipeline] { (Prepare) 00:02:04.696 [Pipeline] writeFile 00:02:04.715 [Pipeline] sh 00:02:04.997 + logger -p user.info -t JENKINS-CI 00:02:05.009 [Pipeline] sh 00:02:05.292 + logger -p user.info -t JENKINS-CI 00:02:05.304 [Pipeline] sh 00:02:05.586 + cat autorun-spdk.conf 00:02:05.586 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:05.586 SPDK_TEST_NVMF=1 00:02:05.586 SPDK_TEST_NVME_CLI=1 00:02:05.586 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:05.586 SPDK_TEST_NVMF_NICS=e810 00:02:05.586 SPDK_TEST_VFIOUSER=1 00:02:05.586 SPDK_RUN_UBSAN=1 00:02:05.586 NET_TYPE=phy 00:02:05.593 RUN_NIGHTLY=0 00:02:05.597 [Pipeline] readFile 00:02:05.620 [Pipeline] withEnv 00:02:05.622 [Pipeline] { 00:02:05.635 [Pipeline] sh 00:02:05.916 + set -ex 00:02:05.916 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:02:05.916 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:05.916 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:05.916 ++ SPDK_TEST_NVMF=1 00:02:05.916 ++ SPDK_TEST_NVME_CLI=1 00:02:05.916 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:05.916 ++ SPDK_TEST_NVMF_NICS=e810 00:02:05.916 ++ SPDK_TEST_VFIOUSER=1 00:02:05.916 ++ SPDK_RUN_UBSAN=1 00:02:05.916 ++ NET_TYPE=phy 00:02:05.916 ++ RUN_NIGHTLY=0 00:02:05.916 + case $SPDK_TEST_NVMF_NICS in 00:02:05.916 + DRIVERS=ice 00:02:05.916 + [[ tcp == \r\d\m\a ]] 00:02:05.916 + [[ -n ice ]] 00:02:05.916 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:02:05.916 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:05.916 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:02:05.916 rmmod: ERROR: Module irdma is not currently loaded 00:02:05.916 rmmod: ERROR: Module i40iw is not currently loaded 00:02:05.916 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:05.916 + true 00:02:05.916 + for D in $DRIVERS 00:02:05.916 + sudo modprobe ice 00:02:05.916 + exit 0 00:02:05.926 [Pipeline] } 00:02:05.946 [Pipeline] // withEnv 00:02:05.952 [Pipeline] } 00:02:05.971 [Pipeline] // stage 00:02:05.981 [Pipeline] catchError 00:02:05.983 [Pipeline] { 00:02:05.995 [Pipeline] timeout 00:02:05.996 Timeout set to expire in 50 min 00:02:05.998 [Pipeline] { 00:02:06.012 [Pipeline] stage 00:02:06.014 [Pipeline] { (Tests) 00:02:06.029 [Pipeline] sh 00:02:06.308 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:06.308 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:06.308 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:06.308 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:02:06.308 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:06.308 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:06.308 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:02:06.308 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:06.308 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:06.308 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:06.308 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:02:06.308 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:06.308 + source /etc/os-release 00:02:06.308 ++ NAME='Fedora Linux' 00:02:06.308 ++ VERSION='38 (Cloud Edition)' 00:02:06.308 ++ ID=fedora 00:02:06.308 ++ VERSION_ID=38 00:02:06.308 ++ VERSION_CODENAME= 00:02:06.308 ++ PLATFORM_ID=platform:f38 00:02:06.308 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:06.308 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:06.308 ++ LOGO=fedora-logo-icon 00:02:06.308 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:06.308 ++ HOME_URL=https://fedoraproject.org/ 00:02:06.308 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:06.308 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:06.308 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:06.308 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:06.308 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:06.308 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:06.308 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:06.308 ++ SUPPORT_END=2024-05-14 00:02:06.308 ++ VARIANT='Cloud Edition' 00:02:06.308 ++ VARIANT_ID=cloud 00:02:06.308 + uname -a 00:02:06.308 Linux spdk-gp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:06.308 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:07.687 Hugepages 00:02:07.687 node hugesize free / total 00:02:07.687 node0 1048576kB 0 / 0 00:02:07.687 node0 2048kB 0 / 0 00:02:07.687 node1 1048576kB 0 / 0 00:02:07.687 node1 2048kB 0 / 0 00:02:07.687 00:02:07.687 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:07.687 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:02:07.687 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:02:07.687 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:02:07.687 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:02:07.687 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:02:07.687 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:02:07.687 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:02:07.687 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:02:07.687 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:02:07.687 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:02:07.687 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:02:07.687 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:02:07.687 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:02:07.687 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:02:07.687 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:02:07.687 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:02:07.687 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:02:07.945 + rm -f /tmp/spdk-ld-path 00:02:07.945 + source autorun-spdk.conf 00:02:07.945 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:07.945 ++ SPDK_TEST_NVMF=1 00:02:07.945 ++ SPDK_TEST_NVME_CLI=1 00:02:07.945 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:07.945 ++ SPDK_TEST_NVMF_NICS=e810 00:02:07.945 ++ SPDK_TEST_VFIOUSER=1 00:02:07.945 ++ SPDK_RUN_UBSAN=1 00:02:07.945 ++ NET_TYPE=phy 00:02:07.945 ++ RUN_NIGHTLY=0 00:02:07.945 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:07.945 + [[ -n '' ]] 00:02:07.945 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:07.945 + for M in /var/spdk/build-*-manifest.txt 00:02:07.945 + [[ -f 
/var/spdk/build-pkg-manifest.txt ]] 00:02:07.945 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:07.945 + for M in /var/spdk/build-*-manifest.txt 00:02:07.945 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:07.945 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:07.945 ++ uname 00:02:07.945 + [[ Linux == \L\i\n\u\x ]] 00:02:07.945 + sudo dmesg -T 00:02:07.945 + sudo dmesg --clear 00:02:07.945 + dmesg_pid=1899501 00:02:07.945 + [[ Fedora Linux == FreeBSD ]] 00:02:07.945 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:07.945 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:07.945 + sudo dmesg -Tw 00:02:07.945 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:07.945 + [[ -x /usr/src/fio-static/fio ]] 00:02:07.945 + export FIO_BIN=/usr/src/fio-static/fio 00:02:07.945 + FIO_BIN=/usr/src/fio-static/fio 00:02:07.945 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:07.945 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:07.945 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:07.945 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:07.945 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:07.945 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:07.945 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:07.945 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:07.945 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:07.945 Test configuration: 00:02:07.945 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:07.945 SPDK_TEST_NVMF=1 00:02:07.945 SPDK_TEST_NVME_CLI=1 00:02:07.945 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:07.945 SPDK_TEST_NVMF_NICS=e810 00:02:07.945 SPDK_TEST_VFIOUSER=1 00:02:07.945 SPDK_RUN_UBSAN=1 00:02:07.945 NET_TYPE=phy 00:02:07.945 RUN_NIGHTLY=0 11:10:03 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:07.945 11:10:03 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:07.945 11:10:03 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:07.945 11:10:03 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:07.945 11:10:03 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.945 11:10:03 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.945 11:10:03 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.945 11:10:03 -- paths/export.sh@5 -- $ export PATH 00:02:07.945 11:10:03 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.945 11:10:03 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:07.945 11:10:03 -- common/autobuild_common.sh@447 -- $ date +%s 00:02:07.945 11:10:03 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721985003.XXXXXX 00:02:07.945 11:10:03 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721985003.7NRJ3Q 00:02:07.945 11:10:03 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:02:07.945 11:10:03 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:02:07.945 11:10:03 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:02:07.945 11:10:03 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:07.945 11:10:03 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:07.945 11:10:03 -- common/autobuild_common.sh@463 -- $ get_config_params 00:02:07.945 11:10:03 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:02:07.945 11:10:03 -- common/autotest_common.sh@10 -- $ set +x 00:02:07.946 11:10:03 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:02:07.946 11:10:03 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:02:07.946 11:10:03 -- pm/common@17 -- $ local monitor 00:02:07.946 11:10:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.946 11:10:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.946 11:10:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.946 11:10:03 -- pm/common@21 -- $ date +%s 00:02:07.946 11:10:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.946 11:10:03 -- pm/common@21 -- $ date +%s 00:02:07.946 11:10:03 -- pm/common@25 -- $ sleep 1 00:02:07.946 11:10:03 -- pm/common@21 -- $ date +%s 00:02:07.946 11:10:03 -- pm/common@21 -- $ date +%s 00:02:07.946 11:10:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721985003 00:02:07.946 11:10:03 -- pm/common@21 -- 
$ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721985003 00:02:07.946 11:10:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721985003 00:02:07.946 11:10:03 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721985003 00:02:07.946 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721985003_collect-vmstat.pm.log 00:02:07.946 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721985003_collect-cpu-load.pm.log 00:02:07.946 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721985003_collect-cpu-temp.pm.log 00:02:07.946 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721985003_collect-bmc-pm.bmc.pm.log 00:02:09.321 11:10:04 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:02:09.321 11:10:04 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:09.321 11:10:04 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:09.321 11:10:04 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:09.321 11:10:04 -- spdk/autobuild.sh@16 -- $ date -u 00:02:09.321 Fri Jul 26 09:10:04 AM UTC 2024 00:02:09.321 11:10:04 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:09.321 v24.09-pre-322-g064b11df7 00:02:09.321 11:10:04 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:09.321 11:10:04 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:09.321 11:10:04 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:09.321 11:10:04 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:09.321 11:10:04 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:09.321 11:10:04 -- common/autotest_common.sh@10 -- $ set +x 00:02:09.321 ************************************ 00:02:09.321 START TEST ubsan 00:02:09.321 ************************************ 00:02:09.321 11:10:04 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:09.321 using ubsan 00:02:09.321 00:02:09.321 real 0m0.000s 00:02:09.321 user 0m0.000s 00:02:09.321 sys 0m0.000s 00:02:09.321 11:10:04 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:09.321 11:10:04 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:09.321 ************************************ 00:02:09.321 END TEST ubsan 00:02:09.321 ************************************ 00:02:09.321 11:10:04 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:09.321 11:10:04 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:09.321 11:10:04 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:09.322 11:10:04 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:09.322 11:10:04 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:09.322 11:10:04 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:09.322 11:10:04 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:09.322 11:10:04 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:09.322 11:10:04 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma 
--with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:02:09.322 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:09.322 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:09.581 Using 'verbs' RDMA provider 00:02:25.405 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:37.612 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:37.877 Creating mk/config.mk...done. 00:02:37.877 Creating mk/cc.flags.mk...done. 00:02:37.877 Type 'make' to build. 00:02:37.877 11:10:33 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:02:37.877 11:10:33 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:37.877 11:10:33 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:37.877 11:10:33 -- common/autotest_common.sh@10 -- $ set +x 00:02:37.877 ************************************ 00:02:37.877 START TEST make 00:02:37.877 ************************************ 00:02:37.877 11:10:33 make -- common/autotest_common.sh@1125 -- $ make -j48 00:02:38.139 make[1]: Nothing to be done for 'all'. 00:02:40.059 The Meson build system 00:02:40.059 Version: 1.3.1 00:02:40.059 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:40.059 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:40.059 Build type: native build 00:02:40.059 Project name: libvfio-user 00:02:40.059 Project version: 0.0.1 00:02:40.059 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:40.059 C linker for the host machine: cc ld.bfd 2.39-16 00:02:40.059 Host machine cpu family: x86_64 00:02:40.059 Host machine cpu: x86_64 00:02:40.059 Run-time dependency threads found: YES 00:02:40.059 Library dl found: YES 00:02:40.059 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:40.059 Run-time dependency json-c found: YES 0.17 00:02:40.059 Run-time dependency cmocka found: YES 1.1.7 00:02:40.059 Program pytest-3 found: NO 00:02:40.059 Program flake8 found: NO 00:02:40.059 Program misspell-fixer found: NO 00:02:40.059 Program restructuredtext-lint found: NO 00:02:40.059 Program valgrind found: YES (/usr/bin/valgrind) 00:02:40.059 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:40.059 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:40.059 Compiler for C supports arguments -Wwrite-strings: YES 00:02:40.059 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:40.059 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:40.059 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:40.059 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:40.059 Build targets in project: 8 00:02:40.059 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:40.059 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:40.059 00:02:40.059 libvfio-user 0.0.1 00:02:40.059 00:02:40.059 User defined options 00:02:40.059 buildtype : debug 00:02:40.059 default_library: shared 00:02:40.059 libdir : /usr/local/lib 00:02:40.059 00:02:40.059 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:40.633 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:40.633 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:40.633 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:40.633 [3/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:40.633 [4/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:40.895 [5/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:40.895 [6/37] Compiling C object samples/null.p/null.c.o 00:02:40.895 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:40.895 [8/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:40.895 [9/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:40.895 [10/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:40.895 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:40.895 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:40.895 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:40.895 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:40.895 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:40.895 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:40.895 [17/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:40.895 [18/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:40.895 [19/37] Compiling C object samples/client.p/client.c.o 00:02:40.895 [20/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:40.895 [21/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:40.895 [22/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:40.895 [23/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:40.895 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:40.895 [25/37] Compiling C object samples/server.p/server.c.o 00:02:40.895 [26/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:40.895 [27/37] Linking target samples/client 00:02:41.160 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:41.160 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:02:41.160 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:41.419 [31/37] Linking target test/unit_tests 00:02:41.419 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:41.419 [33/37] Linking target samples/server 00:02:41.419 [34/37] Linking target samples/null 00:02:41.419 [35/37] Linking target samples/lspci 00:02:41.419 [36/37] Linking target samples/shadow_ioeventfd_server 00:02:41.419 [37/37] Linking target samples/gpio-pci-idio-16 00:02:41.419 INFO: autodetecting backend as ninja 00:02:41.419 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:02:41.419 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:42.365 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:42.365 ninja: no work to do. 00:02:47.635 The Meson build system 00:02:47.635 Version: 1.3.1 00:02:47.635 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:47.635 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:47.635 Build type: native build 00:02:47.635 Program cat found: YES (/usr/bin/cat) 00:02:47.635 Project name: DPDK 00:02:47.635 Project version: 24.03.0 00:02:47.635 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:47.635 C linker for the host machine: cc ld.bfd 2.39-16 00:02:47.635 Host machine cpu family: x86_64 00:02:47.635 Host machine cpu: x86_64 00:02:47.635 Message: ## Building in Developer Mode ## 00:02:47.635 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:47.635 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:47.635 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:47.635 Program python3 found: YES (/usr/bin/python3) 00:02:47.635 Program cat found: YES (/usr/bin/cat) 00:02:47.635 Compiler for C supports arguments -march=native: YES 00:02:47.635 Checking for size of "void *" : 8 00:02:47.635 Checking for size of "void *" : 8 (cached) 00:02:47.635 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:47.635 Library m found: YES 00:02:47.635 Library numa found: YES 00:02:47.635 Has header "numaif.h" : YES 00:02:47.635 Library fdt found: NO 00:02:47.635 Library execinfo found: NO 00:02:47.635 Has header "execinfo.h" : YES 00:02:47.635 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:47.635 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:47.635 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:47.635 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:47.635 Run-time dependency openssl found: YES 3.0.9 00:02:47.635 Run-time dependency libpcap found: YES 1.10.4 00:02:47.635 Has header "pcap.h" with dependency libpcap: YES 00:02:47.635 Compiler for C supports arguments -Wcast-qual: YES 00:02:47.635 Compiler for C supports arguments -Wdeprecated: YES 00:02:47.635 Compiler for C supports arguments -Wformat: YES 00:02:47.635 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:47.635 Compiler for C supports arguments -Wformat-security: NO 00:02:47.635 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:47.635 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:47.635 Compiler for C supports arguments -Wnested-externs: YES 00:02:47.635 Compiler for C supports arguments -Wold-style-definition: YES 00:02:47.636 Compiler for C supports arguments -Wpointer-arith: YES 00:02:47.636 Compiler for C supports arguments -Wsign-compare: YES 00:02:47.636 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:47.636 Compiler for C supports arguments -Wundef: YES 00:02:47.636 Compiler for C supports arguments -Wwrite-strings: YES 00:02:47.636 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:47.636 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:02:47.636 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:47.636 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:47.636 Program objdump found: YES (/usr/bin/objdump) 00:02:47.636 Compiler for C supports arguments -mavx512f: YES 00:02:47.636 Checking if "AVX512 checking" compiles: YES 00:02:47.636 Fetching value of define "__SSE4_2__" : 1 00:02:47.636 Fetching value of define "__AES__" : 1 00:02:47.636 Fetching value of define "__AVX__" : 1 00:02:47.636 Fetching value of define "__AVX2__" : (undefined) 00:02:47.636 Fetching value of define "__AVX512BW__" : (undefined) 00:02:47.636 Fetching value of define "__AVX512CD__" : (undefined) 00:02:47.636 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:47.636 Fetching value of define "__AVX512F__" : (undefined) 00:02:47.636 Fetching value of define "__AVX512VL__" : (undefined) 00:02:47.636 Fetching value of define "__PCLMUL__" : 1 00:02:47.636 Fetching value of define "__RDRND__" : 1 00:02:47.636 Fetching value of define "__RDSEED__" : (undefined) 00:02:47.636 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:47.636 Fetching value of define "__znver1__" : (undefined) 00:02:47.636 Fetching value of define "__znver2__" : (undefined) 00:02:47.636 Fetching value of define "__znver3__" : (undefined) 00:02:47.636 Fetching value of define "__znver4__" : (undefined) 00:02:47.636 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:47.636 Message: lib/log: Defining dependency "log" 00:02:47.636 Message: lib/kvargs: Defining dependency "kvargs" 00:02:47.636 Message: lib/telemetry: Defining dependency "telemetry" 00:02:47.636 Checking for function "getentropy" : NO 00:02:47.636 Message: lib/eal: Defining dependency "eal" 00:02:47.636 Message: lib/ring: Defining dependency "ring" 00:02:47.636 Message: lib/rcu: Defining dependency "rcu" 00:02:47.636 Message: lib/mempool: Defining dependency "mempool" 00:02:47.636 Message: lib/mbuf: Defining dependency "mbuf" 00:02:47.636 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:47.636 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:47.636 Compiler for C supports arguments -mpclmul: YES 00:02:47.636 Compiler for C supports arguments -maes: YES 00:02:47.636 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:47.636 Compiler for C supports arguments -mavx512bw: YES 00:02:47.636 Compiler for C supports arguments -mavx512dq: YES 00:02:47.636 Compiler for C supports arguments -mavx512vl: YES 00:02:47.636 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:47.636 Compiler for C supports arguments -mavx2: YES 00:02:47.636 Compiler for C supports arguments -mavx: YES 00:02:47.636 Message: lib/net: Defining dependency "net" 00:02:47.636 Message: lib/meter: Defining dependency "meter" 00:02:47.636 Message: lib/ethdev: Defining dependency "ethdev" 00:02:47.636 Message: lib/pci: Defining dependency "pci" 00:02:47.636 Message: lib/cmdline: Defining dependency "cmdline" 00:02:47.636 Message: lib/hash: Defining dependency "hash" 00:02:47.636 Message: lib/timer: Defining dependency "timer" 00:02:47.636 Message: lib/compressdev: Defining dependency "compressdev" 00:02:47.636 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:47.636 Message: lib/dmadev: Defining dependency "dmadev" 00:02:47.636 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:47.636 Message: lib/power: Defining dependency "power" 00:02:47.636 Message: lib/reorder: Defining dependency "reorder" 00:02:47.636 
Message: lib/security: Defining dependency "security" 00:02:47.636 Has header "linux/userfaultfd.h" : YES 00:02:47.636 Has header "linux/vduse.h" : YES 00:02:47.636 Message: lib/vhost: Defining dependency "vhost" 00:02:47.636 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:47.636 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:47.636 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:47.636 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:47.636 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:47.636 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:47.636 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:47.636 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:47.636 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:47.636 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:47.636 Program doxygen found: YES (/usr/bin/doxygen) 00:02:47.636 Configuring doxy-api-html.conf using configuration 00:02:47.636 Configuring doxy-api-man.conf using configuration 00:02:47.636 Program mandb found: YES (/usr/bin/mandb) 00:02:47.636 Program sphinx-build found: NO 00:02:47.636 Configuring rte_build_config.h using configuration 00:02:47.636 Message: 00:02:47.636 ================= 00:02:47.636 Applications Enabled 00:02:47.636 ================= 00:02:47.636 00:02:47.636 apps: 00:02:47.636 00:02:47.636 00:02:47.636 Message: 00:02:47.636 ================= 00:02:47.636 Libraries Enabled 00:02:47.636 ================= 00:02:47.636 00:02:47.636 libs: 00:02:47.636 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:47.636 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:47.636 cryptodev, dmadev, power, reorder, security, vhost, 00:02:47.636 00:02:47.636 Message: 00:02:47.636 =============== 00:02:47.636 Drivers Enabled 00:02:47.636 =============== 00:02:47.636 00:02:47.636 common: 00:02:47.636 00:02:47.636 bus: 00:02:47.636 pci, vdev, 00:02:47.636 mempool: 00:02:47.636 ring, 00:02:47.636 dma: 00:02:47.636 00:02:47.636 net: 00:02:47.636 00:02:47.636 crypto: 00:02:47.636 00:02:47.636 compress: 00:02:47.636 00:02:47.636 vdpa: 00:02:47.636 00:02:47.636 00:02:47.636 Message: 00:02:47.636 ================= 00:02:47.636 Content Skipped 00:02:47.636 ================= 00:02:47.636 00:02:47.636 apps: 00:02:47.636 dumpcap: explicitly disabled via build config 00:02:47.636 graph: explicitly disabled via build config 00:02:47.636 pdump: explicitly disabled via build config 00:02:47.636 proc-info: explicitly disabled via build config 00:02:47.636 test-acl: explicitly disabled via build config 00:02:47.636 test-bbdev: explicitly disabled via build config 00:02:47.636 test-cmdline: explicitly disabled via build config 00:02:47.636 test-compress-perf: explicitly disabled via build config 00:02:47.636 test-crypto-perf: explicitly disabled via build config 00:02:47.636 test-dma-perf: explicitly disabled via build config 00:02:47.636 test-eventdev: explicitly disabled via build config 00:02:47.636 test-fib: explicitly disabled via build config 00:02:47.636 test-flow-perf: explicitly disabled via build config 00:02:47.636 test-gpudev: explicitly disabled via build config 00:02:47.636 test-mldev: explicitly disabled via build config 00:02:47.636 test-pipeline: explicitly disabled via build config 00:02:47.636 test-pmd: explicitly disabled via build config 
00:02:47.636 test-regex: explicitly disabled via build config 00:02:47.636 test-sad: explicitly disabled via build config 00:02:47.636 test-security-perf: explicitly disabled via build config 00:02:47.636 00:02:47.636 libs: 00:02:47.636 argparse: explicitly disabled via build config 00:02:47.636 metrics: explicitly disabled via build config 00:02:47.636 acl: explicitly disabled via build config 00:02:47.636 bbdev: explicitly disabled via build config 00:02:47.636 bitratestats: explicitly disabled via build config 00:02:47.636 bpf: explicitly disabled via build config 00:02:47.636 cfgfile: explicitly disabled via build config 00:02:47.636 distributor: explicitly disabled via build config 00:02:47.636 efd: explicitly disabled via build config 00:02:47.636 eventdev: explicitly disabled via build config 00:02:47.636 dispatcher: explicitly disabled via build config 00:02:47.636 gpudev: explicitly disabled via build config 00:02:47.636 gro: explicitly disabled via build config 00:02:47.636 gso: explicitly disabled via build config 00:02:47.636 ip_frag: explicitly disabled via build config 00:02:47.636 jobstats: explicitly disabled via build config 00:02:47.637 latencystats: explicitly disabled via build config 00:02:47.637 lpm: explicitly disabled via build config 00:02:47.637 member: explicitly disabled via build config 00:02:47.637 pcapng: explicitly disabled via build config 00:02:47.637 rawdev: explicitly disabled via build config 00:02:47.637 regexdev: explicitly disabled via build config 00:02:47.637 mldev: explicitly disabled via build config 00:02:47.637 rib: explicitly disabled via build config 00:02:47.637 sched: explicitly disabled via build config 00:02:47.637 stack: explicitly disabled via build config 00:02:47.637 ipsec: explicitly disabled via build config 00:02:47.637 pdcp: explicitly disabled via build config 00:02:47.637 fib: explicitly disabled via build config 00:02:47.637 port: explicitly disabled via build config 00:02:47.637 pdump: explicitly disabled via build config 00:02:47.637 table: explicitly disabled via build config 00:02:47.637 pipeline: explicitly disabled via build config 00:02:47.637 graph: explicitly disabled via build config 00:02:47.637 node: explicitly disabled via build config 00:02:47.637 00:02:47.637 drivers: 00:02:47.637 common/cpt: not in enabled drivers build config 00:02:47.637 common/dpaax: not in enabled drivers build config 00:02:47.637 common/iavf: not in enabled drivers build config 00:02:47.637 common/idpf: not in enabled drivers build config 00:02:47.637 common/ionic: not in enabled drivers build config 00:02:47.637 common/mvep: not in enabled drivers build config 00:02:47.637 common/octeontx: not in enabled drivers build config 00:02:47.637 bus/auxiliary: not in enabled drivers build config 00:02:47.637 bus/cdx: not in enabled drivers build config 00:02:47.637 bus/dpaa: not in enabled drivers build config 00:02:47.637 bus/fslmc: not in enabled drivers build config 00:02:47.637 bus/ifpga: not in enabled drivers build config 00:02:47.637 bus/platform: not in enabled drivers build config 00:02:47.637 bus/uacce: not in enabled drivers build config 00:02:47.637 bus/vmbus: not in enabled drivers build config 00:02:47.637 common/cnxk: not in enabled drivers build config 00:02:47.637 common/mlx5: not in enabled drivers build config 00:02:47.637 common/nfp: not in enabled drivers build config 00:02:47.637 common/nitrox: not in enabled drivers build config 00:02:47.637 common/qat: not in enabled drivers build config 00:02:47.637 common/sfc_efx: not in 
enabled drivers build config 00:02:47.637 mempool/bucket: not in enabled drivers build config 00:02:47.637 mempool/cnxk: not in enabled drivers build config 00:02:47.637 mempool/dpaa: not in enabled drivers build config 00:02:47.637 mempool/dpaa2: not in enabled drivers build config 00:02:47.637 mempool/octeontx: not in enabled drivers build config 00:02:47.637 mempool/stack: not in enabled drivers build config 00:02:47.637 dma/cnxk: not in enabled drivers build config 00:02:47.637 dma/dpaa: not in enabled drivers build config 00:02:47.637 dma/dpaa2: not in enabled drivers build config 00:02:47.637 dma/hisilicon: not in enabled drivers build config 00:02:47.637 dma/idxd: not in enabled drivers build config 00:02:47.637 dma/ioat: not in enabled drivers build config 00:02:47.637 dma/skeleton: not in enabled drivers build config 00:02:47.637 net/af_packet: not in enabled drivers build config 00:02:47.637 net/af_xdp: not in enabled drivers build config 00:02:47.637 net/ark: not in enabled drivers build config 00:02:47.637 net/atlantic: not in enabled drivers build config 00:02:47.637 net/avp: not in enabled drivers build config 00:02:47.637 net/axgbe: not in enabled drivers build config 00:02:47.637 net/bnx2x: not in enabled drivers build config 00:02:47.637 net/bnxt: not in enabled drivers build config 00:02:47.637 net/bonding: not in enabled drivers build config 00:02:47.637 net/cnxk: not in enabled drivers build config 00:02:47.637 net/cpfl: not in enabled drivers build config 00:02:47.637 net/cxgbe: not in enabled drivers build config 00:02:47.637 net/dpaa: not in enabled drivers build config 00:02:47.637 net/dpaa2: not in enabled drivers build config 00:02:47.637 net/e1000: not in enabled drivers build config 00:02:47.637 net/ena: not in enabled drivers build config 00:02:47.637 net/enetc: not in enabled drivers build config 00:02:47.637 net/enetfec: not in enabled drivers build config 00:02:47.637 net/enic: not in enabled drivers build config 00:02:47.637 net/failsafe: not in enabled drivers build config 00:02:47.637 net/fm10k: not in enabled drivers build config 00:02:47.637 net/gve: not in enabled drivers build config 00:02:47.637 net/hinic: not in enabled drivers build config 00:02:47.637 net/hns3: not in enabled drivers build config 00:02:47.637 net/i40e: not in enabled drivers build config 00:02:47.637 net/iavf: not in enabled drivers build config 00:02:47.637 net/ice: not in enabled drivers build config 00:02:47.637 net/idpf: not in enabled drivers build config 00:02:47.637 net/igc: not in enabled drivers build config 00:02:47.637 net/ionic: not in enabled drivers build config 00:02:47.637 net/ipn3ke: not in enabled drivers build config 00:02:47.637 net/ixgbe: not in enabled drivers build config 00:02:47.637 net/mana: not in enabled drivers build config 00:02:47.637 net/memif: not in enabled drivers build config 00:02:47.637 net/mlx4: not in enabled drivers build config 00:02:47.637 net/mlx5: not in enabled drivers build config 00:02:47.637 net/mvneta: not in enabled drivers build config 00:02:47.637 net/mvpp2: not in enabled drivers build config 00:02:47.637 net/netvsc: not in enabled drivers build config 00:02:47.637 net/nfb: not in enabled drivers build config 00:02:47.637 net/nfp: not in enabled drivers build config 00:02:47.637 net/ngbe: not in enabled drivers build config 00:02:47.637 net/null: not in enabled drivers build config 00:02:47.637 net/octeontx: not in enabled drivers build config 00:02:47.637 net/octeon_ep: not in enabled drivers build config 00:02:47.637 
net/pcap: not in enabled drivers build config 00:02:47.637 net/pfe: not in enabled drivers build config 00:02:47.637 net/qede: not in enabled drivers build config 00:02:47.637 net/ring: not in enabled drivers build config 00:02:47.637 net/sfc: not in enabled drivers build config 00:02:47.637 net/softnic: not in enabled drivers build config 00:02:47.637 net/tap: not in enabled drivers build config 00:02:47.637 net/thunderx: not in enabled drivers build config 00:02:47.637 net/txgbe: not in enabled drivers build config 00:02:47.637 net/vdev_netvsc: not in enabled drivers build config 00:02:47.637 net/vhost: not in enabled drivers build config 00:02:47.637 net/virtio: not in enabled drivers build config 00:02:47.637 net/vmxnet3: not in enabled drivers build config 00:02:47.637 raw/*: missing internal dependency, "rawdev" 00:02:47.637 crypto/armv8: not in enabled drivers build config 00:02:47.637 crypto/bcmfs: not in enabled drivers build config 00:02:47.637 crypto/caam_jr: not in enabled drivers build config 00:02:47.637 crypto/ccp: not in enabled drivers build config 00:02:47.637 crypto/cnxk: not in enabled drivers build config 00:02:47.637 crypto/dpaa_sec: not in enabled drivers build config 00:02:47.637 crypto/dpaa2_sec: not in enabled drivers build config 00:02:47.637 crypto/ipsec_mb: not in enabled drivers build config 00:02:47.637 crypto/mlx5: not in enabled drivers build config 00:02:47.637 crypto/mvsam: not in enabled drivers build config 00:02:47.637 crypto/nitrox: not in enabled drivers build config 00:02:47.637 crypto/null: not in enabled drivers build config 00:02:47.637 crypto/octeontx: not in enabled drivers build config 00:02:47.637 crypto/openssl: not in enabled drivers build config 00:02:47.637 crypto/scheduler: not in enabled drivers build config 00:02:47.637 crypto/uadk: not in enabled drivers build config 00:02:47.637 crypto/virtio: not in enabled drivers build config 00:02:47.637 compress/isal: not in enabled drivers build config 00:02:47.637 compress/mlx5: not in enabled drivers build config 00:02:47.637 compress/nitrox: not in enabled drivers build config 00:02:47.637 compress/octeontx: not in enabled drivers build config 00:02:47.637 compress/zlib: not in enabled drivers build config 00:02:47.637 regex/*: missing internal dependency, "regexdev" 00:02:47.637 ml/*: missing internal dependency, "mldev" 00:02:47.637 vdpa/ifc: not in enabled drivers build config 00:02:47.637 vdpa/mlx5: not in enabled drivers build config 00:02:47.637 vdpa/nfp: not in enabled drivers build config 00:02:47.637 vdpa/sfc: not in enabled drivers build config 00:02:47.638 event/*: missing internal dependency, "eventdev" 00:02:47.638 baseband/*: missing internal dependency, "bbdev" 00:02:47.638 gpu/*: missing internal dependency, "gpudev" 00:02:47.638 00:02:47.638 00:02:48.202 Build targets in project: 85 00:02:48.202 00:02:48.202 DPDK 24.03.0 00:02:48.202 00:02:48.202 User defined options 00:02:48.202 buildtype : debug 00:02:48.202 default_library : shared 00:02:48.202 libdir : lib 00:02:48.202 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:48.202 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:48.202 c_link_args : 00:02:48.202 cpu_instruction_set: native 00:02:48.202 disable_apps : 
test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:02:48.202 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:02:48.202 enable_docs : false 00:02:48.202 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:48.202 enable_kmods : false 00:02:48.202 max_lcores : 128 00:02:48.202 tests : false 00:02:48.202 00:02:48.203 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:48.775 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:48.775 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:48.775 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:48.775 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:48.775 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:48.775 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:49.073 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:49.073 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:49.073 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:49.073 [9/268] Linking static target lib/librte_kvargs.a 00:02:49.073 [10/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:49.073 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:49.073 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:49.073 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:49.073 [14/268] Linking static target lib/librte_log.a 00:02:49.073 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:49.073 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:49.660 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.660 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:49.660 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:49.660 [20/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:49.660 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:49.660 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:49.923 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:49.923 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:49.923 [25/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:49.923 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:49.923 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:49.923 [28/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:49.923 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:49.923 [30/268] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:49.923 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:49.923 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:49.923 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:49.923 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:49.923 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:49.923 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:49.923 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:49.923 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:49.923 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:49.923 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:49.923 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:49.923 [42/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:49.923 [43/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:49.923 [44/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:49.923 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:49.923 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:49.923 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:49.923 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:49.923 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:49.923 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:49.923 [51/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:49.923 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:49.923 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:49.923 [54/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:49.923 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:49.923 [56/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:49.923 [57/268] Linking static target lib/librte_telemetry.a 00:02:49.923 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:49.923 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:50.185 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:50.185 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:50.185 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:50.185 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:50.185 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:50.185 [65/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.185 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:50.446 [67/268] Linking target lib/librte_log.so.24.1 00:02:50.446 [68/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:50.446 [69/268] Linking static target lib/librte_pci.a 00:02:50.446 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:50.705 [71/268] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:50.705 [72/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:50.705 [73/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:50.705 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:50.705 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:50.705 [76/268] Linking target lib/librte_kvargs.so.24.1 00:02:50.705 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:50.705 [78/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:50.705 [79/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:50.705 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:50.966 [81/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.966 [82/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:50.966 [83/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:50.966 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:50.966 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:50.966 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:50.966 [87/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:50.966 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:50.966 [89/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:50.966 [90/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:50.966 [91/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:50.966 [92/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:50.966 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:50.967 [94/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:50.967 [95/268] Linking static target lib/librte_ring.a 00:02:50.967 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:50.967 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:50.967 [98/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:50.967 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:50.967 [100/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:50.967 [101/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:50.967 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:50.967 [103/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:50.967 [104/268] Linking static target lib/librte_meter.a 00:02:50.967 [105/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:50.967 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:50.967 [107/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:50.967 [108/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:50.967 [109/268] Linking static target lib/librte_eal.a 00:02:50.967 [110/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.233 [111/268] Linking target lib/librte_telemetry.so.24.1 00:02:51.233 
[112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:51.233 [113/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:51.233 [114/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:51.233 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:51.233 [116/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:51.233 [117/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:51.233 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:51.233 [119/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:51.233 [120/268] Linking static target lib/librte_mempool.a 00:02:51.233 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:51.233 [122/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:51.233 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:51.233 [124/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:51.233 [125/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:51.233 [126/268] Linking static target lib/librte_rcu.a 00:02:51.233 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:51.492 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:51.492 [129/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:51.492 [130/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:51.492 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:51.492 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:51.492 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:51.492 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:51.754 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:51.754 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:51.754 [137/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.754 [138/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.754 [139/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:51.754 [140/268] Linking static target lib/librte_net.a 00:02:51.754 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:51.754 [142/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:51.754 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:51.754 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:51.754 [145/268] Linking static target lib/librte_cmdline.a 00:02:51.754 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:52.015 [147/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.015 [148/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:52.015 [149/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:52.015 [150/268] Linking static target lib/librte_timer.a 00:02:52.015 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:52.015 [152/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:52.015 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:52.015 [154/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:52.015 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:52.015 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:52.274 [157/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.274 [158/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:52.274 [159/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:52.274 [160/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:52.274 [161/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:52.274 [162/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:52.274 [163/268] Linking static target lib/librte_dmadev.a 00:02:52.274 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:52.274 [165/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:52.274 [166/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.274 [167/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:52.274 [168/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:52.274 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:52.532 [170/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.532 [171/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:52.532 [172/268] Linking static target lib/librte_hash.a 00:02:52.532 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:52.532 [174/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:52.532 [175/268] Linking static target lib/librte_compressdev.a 00:02:52.532 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:52.532 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:52.532 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:52.532 [179/268] Linking static target lib/librte_power.a 00:02:52.532 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:52.533 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:52.533 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:52.791 [183/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:52.791 [184/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:52.791 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:52.791 [186/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:52.791 [187/268] Linking static target lib/librte_reorder.a 00:02:52.791 [188/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.791 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:52.791 [190/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.791 [191/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:52.791 [192/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:52.791 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:52.791 [194/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:52.791 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:53.049 [196/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:53.049 [197/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:53.049 [198/268] Linking static target lib/librte_mbuf.a 00:02:53.049 [199/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:53.049 [200/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:53.049 [201/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:53.049 [202/268] Linking static target drivers/librte_bus_vdev.a 00:02:53.049 [203/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.049 [204/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.049 [205/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:53.049 [206/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:53.049 [207/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:53.049 [208/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.049 [209/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:53.049 [210/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:53.049 [211/268] Linking static target drivers/librte_bus_pci.a 00:02:53.049 [212/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.049 [213/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:53.049 [214/268] Linking static target lib/librte_security.a 00:02:53.307 [215/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.307 [216/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:53.307 [217/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:53.307 [218/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:53.307 [219/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:53.307 [220/268] Linking static target drivers/librte_mempool_ring.a 00:02:53.307 [221/268] Linking static target lib/librte_ethdev.a 00:02:53.307 [222/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.565 [223/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:53.565 [224/268] Linking static target lib/librte_cryptodev.a 00:02:53.565 [225/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.565 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.939 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.915 [228/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:59.205 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.205 [230/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.205 [231/268] Linking target lib/librte_eal.so.24.1 00:02:59.205 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:59.205 [233/268] Linking target lib/librte_pci.so.24.1 00:02:59.205 [234/268] Linking target lib/librte_timer.so.24.1 00:02:59.205 [235/268] Linking target lib/librte_meter.so.24.1 00:02:59.205 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:59.205 [237/268] Linking target lib/librte_ring.so.24.1 00:02:59.205 [238/268] Linking target lib/librte_dmadev.so.24.1 00:02:59.205 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:59.205 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:59.205 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:59.205 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:59.205 [243/268] Linking target lib/librte_rcu.so.24.1 00:02:59.205 [244/268] Linking target lib/librte_mempool.so.24.1 00:02:59.205 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:59.205 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:59.463 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:59.463 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:59.463 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:59.463 [250/268] Linking target lib/librte_mbuf.so.24.1 00:02:59.722 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:59.722 [252/268] Linking target lib/librte_compressdev.so.24.1 00:02:59.722 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:02:59.722 [254/268] Linking target lib/librte_reorder.so.24.1 00:02:59.722 [255/268] Linking target lib/librte_net.so.24.1 00:02:59.980 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:59.980 [257/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:59.980 [258/268] Linking target lib/librte_hash.so.24.1 00:02:59.980 [259/268] Linking target lib/librte_cmdline.so.24.1 00:02:59.981 [260/268] Linking target lib/librte_security.so.24.1 00:02:59.981 [261/268] Linking target lib/librte_ethdev.so.24.1 00:03:00.239 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:00.239 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:00.239 [264/268] Linking target lib/librte_power.so.24.1 00:03:08.359 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:08.359 [266/268] Linking static target lib/librte_vhost.a 00:03:08.620 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.620 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:08.620 INFO: autodetecting backend as ninja 00:03:08.620 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:03:10.557 CC lib/ut_mock/mock.o 00:03:10.557 CC lib/ut/ut.o 00:03:10.557 CC 
lib/log/log.o 00:03:10.557 CC lib/log/log_flags.o 00:03:10.557 CC lib/log/log_deprecated.o 00:03:10.557 LIB libspdk_ut_mock.a 00:03:10.557 LIB libspdk_ut.a 00:03:10.557 LIB libspdk_log.a 00:03:10.557 SO libspdk_ut_mock.so.6.0 00:03:10.557 SO libspdk_log.so.7.0 00:03:10.557 SO libspdk_ut.so.2.0 00:03:10.816 SYMLINK libspdk_ut_mock.so 00:03:10.816 SYMLINK libspdk_ut.so 00:03:10.816 SYMLINK libspdk_log.so 00:03:10.816 CC lib/ioat/ioat.o 00:03:10.816 CXX lib/trace_parser/trace.o 00:03:10.816 CC lib/util/base64.o 00:03:10.816 CC lib/util/bit_array.o 00:03:10.816 CC lib/util/crc16.o 00:03:10.816 CC lib/util/cpuset.o 00:03:10.816 CC lib/util/crc32.o 00:03:10.816 CC lib/util/crc32c.o 00:03:10.816 CC lib/util/crc32_ieee.o 00:03:10.816 CC lib/dma/dma.o 00:03:10.816 CC lib/util/crc64.o 00:03:10.816 CC lib/util/dif.o 00:03:10.816 CC lib/util/fd.o 00:03:11.074 CC lib/util/fd_group.o 00:03:11.074 CC lib/util/file.o 00:03:11.074 CC lib/util/hexlify.o 00:03:11.074 CC lib/util/iov.o 00:03:11.074 CC lib/util/math.o 00:03:11.074 CC lib/util/net.o 00:03:11.074 CC lib/util/pipe.o 00:03:11.074 CC lib/util/strerror_tls.o 00:03:11.074 CC lib/util/string.o 00:03:11.074 CC lib/util/uuid.o 00:03:11.074 CC lib/util/xor.o 00:03:11.074 CC lib/util/zipf.o 00:03:11.074 CC lib/vfio_user/host/vfio_user_pci.o 00:03:11.074 CC lib/vfio_user/host/vfio_user.o 00:03:11.333 LIB libspdk_dma.a 00:03:11.333 SO libspdk_dma.so.4.0 00:03:11.333 LIB libspdk_ioat.a 00:03:11.333 SYMLINK libspdk_dma.so 00:03:11.333 SO libspdk_ioat.so.7.0 00:03:11.333 SYMLINK libspdk_ioat.so 00:03:11.333 LIB libspdk_vfio_user.a 00:03:11.591 SO libspdk_vfio_user.so.5.0 00:03:11.591 SYMLINK libspdk_vfio_user.so 00:03:11.591 LIB libspdk_util.a 00:03:11.927 SO libspdk_util.so.10.0 00:03:11.927 SYMLINK libspdk_util.so 00:03:12.185 CC lib/env_dpdk/env.o 00:03:12.185 CC lib/env_dpdk/memory.o 00:03:12.185 CC lib/env_dpdk/pci.o 00:03:12.185 CC lib/env_dpdk/init.o 00:03:12.185 CC lib/env_dpdk/threads.o 00:03:12.185 CC lib/env_dpdk/pci_ioat.o 00:03:12.185 CC lib/env_dpdk/pci_virtio.o 00:03:12.185 CC lib/env_dpdk/pci_vmd.o 00:03:12.185 CC lib/env_dpdk/pci_idxd.o 00:03:12.185 CC lib/env_dpdk/pci_event.o 00:03:12.185 CC lib/env_dpdk/sigbus_handler.o 00:03:12.185 CC lib/env_dpdk/pci_dpdk.o 00:03:12.185 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:12.185 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:12.185 CC lib/conf/conf.o 00:03:12.185 CC lib/idxd/idxd.o 00:03:12.185 CC lib/idxd/idxd_user.o 00:03:12.185 CC lib/idxd/idxd_kernel.o 00:03:12.185 CC lib/rdma_utils/rdma_utils.o 00:03:12.185 CC lib/rdma_provider/common.o 00:03:12.185 CC lib/json/json_parse.o 00:03:12.185 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:12.185 CC lib/json/json_util.o 00:03:12.185 CC lib/json/json_write.o 00:03:12.185 CC lib/vmd/vmd.o 00:03:12.185 CC lib/vmd/led.o 00:03:12.445 LIB libspdk_conf.a 00:03:12.445 LIB libspdk_trace_parser.a 00:03:12.445 SO libspdk_conf.so.6.0 00:03:12.445 SO libspdk_trace_parser.so.5.0 00:03:12.445 SYMLINK libspdk_conf.so 00:03:12.445 LIB libspdk_rdma_provider.a 00:03:12.445 LIB libspdk_rdma_utils.a 00:03:12.445 SYMLINK libspdk_trace_parser.so 00:03:12.445 SO libspdk_rdma_utils.so.1.0 00:03:12.445 SO libspdk_rdma_provider.so.6.0 00:03:12.704 LIB libspdk_json.a 00:03:12.704 SYMLINK libspdk_rdma_utils.so 00:03:12.704 SYMLINK libspdk_rdma_provider.so 00:03:12.704 SO libspdk_json.so.6.0 00:03:12.704 SYMLINK libspdk_json.so 00:03:12.704 LIB libspdk_idxd.a 00:03:12.704 SO libspdk_idxd.so.12.0 00:03:12.963 SYMLINK libspdk_idxd.so 00:03:12.963 CC lib/jsonrpc/jsonrpc_server.o 00:03:12.963 CC 
lib/jsonrpc/jsonrpc_server_tcp.o 00:03:12.963 CC lib/jsonrpc/jsonrpc_client.o 00:03:12.963 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:13.222 LIB libspdk_vmd.a 00:03:13.222 SO libspdk_vmd.so.6.0 00:03:13.222 SYMLINK libspdk_vmd.so 00:03:13.222 LIB libspdk_jsonrpc.a 00:03:13.222 SO libspdk_jsonrpc.so.6.0 00:03:13.481 SYMLINK libspdk_jsonrpc.so 00:03:13.739 CC lib/rpc/rpc.o 00:03:13.997 LIB libspdk_rpc.a 00:03:14.256 SO libspdk_rpc.so.6.0 00:03:14.256 SYMLINK libspdk_rpc.so 00:03:14.515 CC lib/trace/trace.o 00:03:14.515 CC lib/trace/trace_flags.o 00:03:14.515 CC lib/trace/trace_rpc.o 00:03:14.515 CC lib/keyring/keyring.o 00:03:14.515 CC lib/keyring/keyring_rpc.o 00:03:14.515 CC lib/notify/notify.o 00:03:14.515 CC lib/notify/notify_rpc.o 00:03:14.515 LIB libspdk_env_dpdk.a 00:03:14.515 SO libspdk_env_dpdk.so.15.0 00:03:14.515 LIB libspdk_notify.a 00:03:14.774 SO libspdk_notify.so.6.0 00:03:14.774 LIB libspdk_keyring.a 00:03:14.774 SO libspdk_keyring.so.1.0 00:03:14.774 SYMLINK libspdk_notify.so 00:03:14.774 LIB libspdk_trace.a 00:03:14.774 SYMLINK libspdk_keyring.so 00:03:14.774 SO libspdk_trace.so.10.0 00:03:14.774 SYMLINK libspdk_env_dpdk.so 00:03:15.032 SYMLINK libspdk_trace.so 00:03:15.032 CC lib/sock/sock.o 00:03:15.032 CC lib/sock/sock_rpc.o 00:03:15.032 CC lib/thread/thread.o 00:03:15.032 CC lib/thread/iobuf.o 00:03:15.967 LIB libspdk_sock.a 00:03:15.967 SO libspdk_sock.so.10.0 00:03:15.967 SYMLINK libspdk_sock.so 00:03:16.226 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:16.226 CC lib/nvme/nvme_ctrlr.o 00:03:16.226 CC lib/nvme/nvme_fabric.o 00:03:16.226 CC lib/nvme/nvme_ns_cmd.o 00:03:16.226 CC lib/nvme/nvme_ns.o 00:03:16.226 CC lib/nvme/nvme_pcie_common.o 00:03:16.226 CC lib/nvme/nvme_pcie.o 00:03:16.226 CC lib/nvme/nvme_qpair.o 00:03:16.226 CC lib/nvme/nvme.o 00:03:16.226 CC lib/nvme/nvme_quirks.o 00:03:16.226 CC lib/nvme/nvme_transport.o 00:03:16.226 CC lib/nvme/nvme_discovery.o 00:03:16.226 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:16.226 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:16.226 CC lib/nvme/nvme_tcp.o 00:03:16.226 CC lib/nvme/nvme_opal.o 00:03:16.226 CC lib/nvme/nvme_io_msg.o 00:03:16.226 CC lib/nvme/nvme_poll_group.o 00:03:16.226 CC lib/nvme/nvme_zns.o 00:03:16.226 CC lib/nvme/nvme_stubs.o 00:03:16.226 CC lib/nvme/nvme_auth.o 00:03:16.226 CC lib/nvme/nvme_cuse.o 00:03:16.226 CC lib/nvme/nvme_vfio_user.o 00:03:16.226 CC lib/nvme/nvme_rdma.o 00:03:17.160 LIB libspdk_thread.a 00:03:17.160 SO libspdk_thread.so.10.1 00:03:17.160 SYMLINK libspdk_thread.so 00:03:17.418 CC lib/accel/accel.o 00:03:17.418 CC lib/virtio/virtio.o 00:03:17.418 CC lib/vfu_tgt/tgt_endpoint.o 00:03:17.418 CC lib/virtio/virtio_vhost_user.o 00:03:17.418 CC lib/accel/accel_rpc.o 00:03:17.418 CC lib/vfu_tgt/tgt_rpc.o 00:03:17.418 CC lib/accel/accel_sw.o 00:03:17.418 CC lib/init/json_config.o 00:03:17.418 CC lib/virtio/virtio_vfio_user.o 00:03:17.418 CC lib/init/subsystem.o 00:03:17.418 CC lib/virtio/virtio_pci.o 00:03:17.418 CC lib/init/subsystem_rpc.o 00:03:17.418 CC lib/blob/blobstore.o 00:03:17.418 CC lib/init/rpc.o 00:03:17.418 CC lib/blob/request.o 00:03:17.418 CC lib/blob/zeroes.o 00:03:17.418 CC lib/blob/blob_bs_dev.o 00:03:17.677 LIB libspdk_init.a 00:03:17.677 SO libspdk_init.so.5.0 00:03:17.677 LIB libspdk_vfu_tgt.a 00:03:17.677 LIB libspdk_virtio.a 00:03:17.677 SO libspdk_vfu_tgt.so.3.0 00:03:17.677 SYMLINK libspdk_init.so 00:03:17.677 SO libspdk_virtio.so.7.0 00:03:17.936 SYMLINK libspdk_vfu_tgt.so 00:03:17.936 SYMLINK libspdk_virtio.so 00:03:17.936 CC lib/event/app.o 00:03:17.936 CC lib/event/reactor.o 
00:03:17.936 CC lib/event/log_rpc.o 00:03:17.936 CC lib/event/app_rpc.o 00:03:17.936 CC lib/event/scheduler_static.o 00:03:18.872 LIB libspdk_event.a 00:03:18.872 LIB libspdk_accel.a 00:03:18.872 SO libspdk_event.so.14.0 00:03:18.872 SO libspdk_accel.so.16.0 00:03:18.872 SYMLINK libspdk_event.so 00:03:18.872 SYMLINK libspdk_accel.so 00:03:18.872 LIB libspdk_nvme.a 00:03:19.130 CC lib/bdev/bdev.o 00:03:19.130 CC lib/bdev/bdev_rpc.o 00:03:19.130 CC lib/bdev/part.o 00:03:19.130 CC lib/bdev/bdev_zone.o 00:03:19.130 CC lib/bdev/scsi_nvme.o 00:03:19.130 SO libspdk_nvme.so.13.1 00:03:19.701 SYMLINK libspdk_nvme.so 00:03:23.047 LIB libspdk_blob.a 00:03:23.047 SO libspdk_blob.so.11.0 00:03:23.305 SYMLINK libspdk_blob.so 00:03:23.563 CC lib/lvol/lvol.o 00:03:23.563 CC lib/blobfs/blobfs.o 00:03:23.563 CC lib/blobfs/tree.o 00:03:24.536 LIB libspdk_bdev.a 00:03:24.536 SO libspdk_bdev.so.16.0 00:03:24.536 LIB libspdk_blobfs.a 00:03:24.536 SO libspdk_blobfs.so.10.0 00:03:24.536 SYMLINK libspdk_bdev.so 00:03:24.536 SYMLINK libspdk_blobfs.so 00:03:24.802 LIB libspdk_lvol.a 00:03:24.802 SO libspdk_lvol.so.10.0 00:03:24.802 CC lib/scsi/dev.o 00:03:24.802 CC lib/nbd/nbd.o 00:03:24.802 CC lib/scsi/lun.o 00:03:24.802 CC lib/nvmf/ctrlr.o 00:03:24.802 CC lib/scsi/port.o 00:03:24.802 CC lib/nbd/nbd_rpc.o 00:03:24.802 CC lib/nvmf/ctrlr_discovery.o 00:03:24.802 CC lib/scsi/scsi.o 00:03:24.802 CC lib/ftl/ftl_core.o 00:03:24.802 CC lib/ftl/ftl_init.o 00:03:24.802 CC lib/scsi/scsi_bdev.o 00:03:24.802 CC lib/nvmf/ctrlr_bdev.o 00:03:24.802 CC lib/ftl/ftl_layout.o 00:03:24.802 CC lib/scsi/scsi_pr.o 00:03:24.802 CC lib/ftl/ftl_debug.o 00:03:24.802 CC lib/scsi/scsi_rpc.o 00:03:24.802 CC lib/nvmf/subsystem.o 00:03:24.802 CC lib/scsi/task.o 00:03:24.802 CC lib/nvmf/nvmf.o 00:03:24.802 CC lib/ftl/ftl_io.o 00:03:24.802 CC lib/ftl/ftl_sb.o 00:03:24.802 CC lib/ftl/ftl_l2p.o 00:03:24.802 SYMLINK libspdk_lvol.so 00:03:24.802 CC lib/nvmf/nvmf_rpc.o 00:03:24.802 CC lib/ftl/ftl_l2p_flat.o 00:03:24.802 CC lib/nvmf/transport.o 00:03:24.802 CC lib/nvmf/tcp.o 00:03:24.802 CC lib/ftl/ftl_nv_cache.o 00:03:24.802 CC lib/nvmf/stubs.o 00:03:24.802 CC lib/ftl/ftl_band.o 00:03:24.802 CC lib/nvmf/mdns_server.o 00:03:24.802 CC lib/ftl/ftl_band_ops.o 00:03:24.802 CC lib/ftl/ftl_writer.o 00:03:24.802 CC lib/nvmf/vfio_user.o 00:03:24.802 CC lib/ftl/ftl_rq.o 00:03:24.803 CC lib/nvmf/rdma.o 00:03:24.803 CC lib/ftl/ftl_reloc.o 00:03:24.803 CC lib/nvmf/auth.o 00:03:24.803 CC lib/ftl/ftl_l2p_cache.o 00:03:24.803 CC lib/ublk/ublk.o 00:03:24.803 CC lib/ftl/ftl_p2l.o 00:03:24.803 CC lib/ftl/mngt/ftl_mngt.o 00:03:24.803 CC lib/ublk/ublk_rpc.o 00:03:24.803 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:24.803 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:24.803 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:24.803 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:24.803 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:24.803 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:25.061 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:25.061 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:25.324 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:25.324 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:25.324 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:25.324 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:25.324 CC lib/ftl/utils/ftl_conf.o 00:03:25.324 CC lib/ftl/utils/ftl_md.o 00:03:25.324 CC lib/ftl/utils/ftl_mempool.o 00:03:25.324 CC lib/ftl/utils/ftl_bitmap.o 00:03:25.324 CC lib/ftl/utils/ftl_property.o 00:03:25.324 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:25.324 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:25.324 CC lib/ftl/upgrade/ftl_sb_upgrade.o 
00:03:25.324 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:25.324 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:25.324 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:25.324 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:25.324 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:25.324 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:25.324 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:25.583 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:25.583 CC lib/ftl/base/ftl_base_dev.o 00:03:25.584 CC lib/ftl/base/ftl_base_bdev.o 00:03:25.584 CC lib/ftl/ftl_trace.o 00:03:25.584 LIB libspdk_nbd.a 00:03:25.584 SO libspdk_nbd.so.7.0 00:03:25.842 SYMLINK libspdk_nbd.so 00:03:25.842 LIB libspdk_scsi.a 00:03:25.842 SO libspdk_scsi.so.9.0 00:03:25.842 SYMLINK libspdk_scsi.so 00:03:26.101 LIB libspdk_ublk.a 00:03:26.101 SO libspdk_ublk.so.3.0 00:03:26.101 CC lib/iscsi/conn.o 00:03:26.101 CC lib/vhost/vhost.o 00:03:26.101 CC lib/iscsi/init_grp.o 00:03:26.101 CC lib/vhost/vhost_rpc.o 00:03:26.101 CC lib/iscsi/iscsi.o 00:03:26.101 CC lib/iscsi/md5.o 00:03:26.101 CC lib/vhost/vhost_scsi.o 00:03:26.101 CC lib/vhost/vhost_blk.o 00:03:26.101 CC lib/iscsi/param.o 00:03:26.101 CC lib/iscsi/portal_grp.o 00:03:26.101 CC lib/vhost/rte_vhost_user.o 00:03:26.101 CC lib/iscsi/tgt_node.o 00:03:26.101 CC lib/iscsi/iscsi_subsystem.o 00:03:26.101 CC lib/iscsi/iscsi_rpc.o 00:03:26.101 CC lib/iscsi/task.o 00:03:26.101 SYMLINK libspdk_ublk.so 00:03:26.360 LIB libspdk_ftl.a 00:03:26.618 SO libspdk_ftl.so.9.0 00:03:26.876 SYMLINK libspdk_ftl.so 00:03:27.444 LIB libspdk_nvmf.a 00:03:27.444 LIB libspdk_vhost.a 00:03:27.702 SO libspdk_vhost.so.8.0 00:03:27.702 SO libspdk_nvmf.so.19.0 00:03:27.960 SYMLINK libspdk_vhost.so 00:03:27.960 LIB libspdk_iscsi.a 00:03:27.960 SO libspdk_iscsi.so.8.0 00:03:27.960 SYMLINK libspdk_nvmf.so 00:03:28.218 SYMLINK libspdk_iscsi.so 00:03:28.477 CC module/env_dpdk/env_dpdk_rpc.o 00:03:28.477 CC module/vfu_device/vfu_virtio.o 00:03:28.477 CC module/vfu_device/vfu_virtio_blk.o 00:03:28.477 CC module/vfu_device/vfu_virtio_scsi.o 00:03:28.477 CC module/vfu_device/vfu_virtio_rpc.o 00:03:28.736 CC module/accel/dsa/accel_dsa.o 00:03:28.736 CC module/accel/dsa/accel_dsa_rpc.o 00:03:28.736 CC module/accel/iaa/accel_iaa.o 00:03:28.736 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:28.736 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:28.736 CC module/accel/iaa/accel_iaa_rpc.o 00:03:28.736 CC module/sock/posix/posix.o 00:03:28.736 CC module/accel/error/accel_error.o 00:03:28.736 CC module/keyring/linux/keyring.o 00:03:28.736 CC module/accel/error/accel_error_rpc.o 00:03:28.736 CC module/keyring/linux/keyring_rpc.o 00:03:28.736 CC module/blob/bdev/blob_bdev.o 00:03:28.736 CC module/scheduler/gscheduler/gscheduler.o 00:03:28.736 CC module/accel/ioat/accel_ioat.o 00:03:28.736 CC module/accel/ioat/accel_ioat_rpc.o 00:03:28.736 CC module/keyring/file/keyring.o 00:03:28.736 CC module/keyring/file/keyring_rpc.o 00:03:28.736 LIB libspdk_env_dpdk_rpc.a 00:03:28.736 SO libspdk_env_dpdk_rpc.so.6.0 00:03:28.736 SYMLINK libspdk_env_dpdk_rpc.so 00:03:28.736 LIB libspdk_scheduler_gscheduler.a 00:03:28.736 LIB libspdk_keyring_linux.a 00:03:28.736 LIB libspdk_keyring_file.a 00:03:28.736 LIB libspdk_scheduler_dpdk_governor.a 00:03:28.994 SO libspdk_scheduler_gscheduler.so.4.0 00:03:28.994 SO libspdk_keyring_linux.so.1.0 00:03:28.994 SO libspdk_keyring_file.so.1.0 00:03:28.994 LIB libspdk_accel_ioat.a 00:03:28.994 LIB libspdk_accel_error.a 00:03:28.994 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:28.994 LIB libspdk_accel_iaa.a 00:03:28.994 SO libspdk_accel_ioat.so.6.0 
00:03:28.994 SO libspdk_accel_error.so.2.0 00:03:28.994 LIB libspdk_scheduler_dynamic.a 00:03:28.994 SO libspdk_accel_iaa.so.3.0 00:03:28.994 SYMLINK libspdk_scheduler_gscheduler.so 00:03:28.994 SYMLINK libspdk_keyring_linux.so 00:03:28.994 SYMLINK libspdk_keyring_file.so 00:03:28.994 LIB libspdk_accel_dsa.a 00:03:28.994 SO libspdk_scheduler_dynamic.so.4.0 00:03:28.994 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:28.994 SO libspdk_accel_dsa.so.5.0 00:03:28.994 SYMLINK libspdk_accel_error.so 00:03:28.994 LIB libspdk_blob_bdev.a 00:03:28.994 SYMLINK libspdk_accel_iaa.so 00:03:28.994 SYMLINK libspdk_accel_ioat.so 00:03:28.994 SYMLINK libspdk_scheduler_dynamic.so 00:03:28.994 SO libspdk_blob_bdev.so.11.0 00:03:28.994 SYMLINK libspdk_accel_dsa.so 00:03:28.994 SYMLINK libspdk_blob_bdev.so 00:03:29.561 CC module/bdev/nvme/bdev_nvme.o 00:03:29.561 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:29.561 CC module/bdev/nvme/nvme_rpc.o 00:03:29.561 CC module/bdev/nvme/bdev_mdns_client.o 00:03:29.561 CC module/bdev/nvme/vbdev_opal.o 00:03:29.561 CC module/bdev/aio/bdev_aio.o 00:03:29.561 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:29.561 CC module/bdev/aio/bdev_aio_rpc.o 00:03:29.561 CC module/bdev/malloc/bdev_malloc.o 00:03:29.561 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:29.561 CC module/bdev/lvol/vbdev_lvol.o 00:03:29.561 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:29.561 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:29.561 CC module/bdev/raid/bdev_raid.o 00:03:29.561 CC module/bdev/raid/bdev_raid_sb.o 00:03:29.561 CC module/bdev/delay/vbdev_delay.o 00:03:29.561 CC module/bdev/raid/bdev_raid_rpc.o 00:03:29.561 CC module/bdev/error/vbdev_error.o 00:03:29.561 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:29.561 CC module/bdev/error/vbdev_error_rpc.o 00:03:29.561 CC module/bdev/raid/raid0.o 00:03:29.561 CC module/bdev/raid/raid1.o 00:03:29.561 CC module/bdev/null/bdev_null.o 00:03:29.561 CC module/bdev/passthru/vbdev_passthru.o 00:03:29.561 CC module/bdev/null/bdev_null_rpc.o 00:03:29.561 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:29.561 CC module/bdev/gpt/gpt.o 00:03:29.561 CC module/bdev/raid/concat.o 00:03:29.561 CC module/bdev/gpt/vbdev_gpt.o 00:03:29.561 CC module/blobfs/bdev/blobfs_bdev.o 00:03:29.561 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:29.561 CC module/bdev/split/vbdev_split.o 00:03:29.561 CC module/bdev/iscsi/bdev_iscsi.o 00:03:29.561 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:29.561 CC module/bdev/split/vbdev_split_rpc.o 00:03:29.561 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:29.561 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:29.561 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:29.561 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:29.561 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:29.561 LIB libspdk_vfu_device.a 00:03:29.561 CC module/bdev/ftl/bdev_ftl.o 00:03:29.561 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:29.561 SO libspdk_vfu_device.so.3.0 00:03:29.561 SYMLINK libspdk_vfu_device.so 00:03:29.561 LIB libspdk_sock_posix.a 00:03:29.819 SO libspdk_sock_posix.so.6.0 00:03:29.819 SYMLINK libspdk_sock_posix.so 00:03:29.819 LIB libspdk_blobfs_bdev.a 00:03:29.819 SO libspdk_blobfs_bdev.so.6.0 00:03:29.819 SYMLINK libspdk_blobfs_bdev.so 00:03:29.819 LIB libspdk_bdev_split.a 00:03:29.819 LIB libspdk_bdev_null.a 00:03:29.819 LIB libspdk_bdev_error.a 00:03:29.819 LIB libspdk_bdev_gpt.a 00:03:29.819 SO libspdk_bdev_split.so.6.0 00:03:29.819 LIB libspdk_bdev_delay.a 00:03:29.819 SO libspdk_bdev_null.so.6.0 00:03:30.076 SO libspdk_bdev_gpt.so.6.0 00:03:30.076 SO 
libspdk_bdev_error.so.6.0 00:03:30.076 LIB libspdk_bdev_aio.a 00:03:30.076 SO libspdk_bdev_delay.so.6.0 00:03:30.076 LIB libspdk_bdev_passthru.a 00:03:30.076 LIB libspdk_bdev_ftl.a 00:03:30.076 SYMLINK libspdk_bdev_split.so 00:03:30.076 SO libspdk_bdev_aio.so.6.0 00:03:30.076 SO libspdk_bdev_passthru.so.6.0 00:03:30.076 SO libspdk_bdev_ftl.so.6.0 00:03:30.076 SYMLINK libspdk_bdev_null.so 00:03:30.076 SYMLINK libspdk_bdev_error.so 00:03:30.076 SYMLINK libspdk_bdev_gpt.so 00:03:30.076 SYMLINK libspdk_bdev_delay.so 00:03:30.076 LIB libspdk_bdev_iscsi.a 00:03:30.076 LIB libspdk_bdev_zone_block.a 00:03:30.076 SYMLINK libspdk_bdev_aio.so 00:03:30.076 SYMLINK libspdk_bdev_passthru.so 00:03:30.076 SYMLINK libspdk_bdev_ftl.so 00:03:30.076 SO libspdk_bdev_iscsi.so.6.0 00:03:30.076 SO libspdk_bdev_zone_block.so.6.0 00:03:30.076 LIB libspdk_bdev_malloc.a 00:03:30.076 SO libspdk_bdev_malloc.so.6.0 00:03:30.076 SYMLINK libspdk_bdev_zone_block.so 00:03:30.076 SYMLINK libspdk_bdev_iscsi.so 00:03:30.076 LIB libspdk_bdev_lvol.a 00:03:30.076 SYMLINK libspdk_bdev_malloc.so 00:03:30.076 SO libspdk_bdev_lvol.so.6.0 00:03:30.334 SYMLINK libspdk_bdev_lvol.so 00:03:30.334 LIB libspdk_bdev_virtio.a 00:03:30.334 SO libspdk_bdev_virtio.so.6.0 00:03:30.334 SYMLINK libspdk_bdev_virtio.so 00:03:30.900 LIB libspdk_bdev_raid.a 00:03:30.900 SO libspdk_bdev_raid.so.6.0 00:03:31.159 SYMLINK libspdk_bdev_raid.so 00:03:34.445 LIB libspdk_bdev_nvme.a 00:03:34.445 SO libspdk_bdev_nvme.so.7.0 00:03:34.445 SYMLINK libspdk_bdev_nvme.so 00:03:34.445 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:34.445 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:34.445 CC module/event/subsystems/keyring/keyring.o 00:03:34.445 CC module/event/subsystems/iobuf/iobuf.o 00:03:34.445 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:34.445 CC module/event/subsystems/scheduler/scheduler.o 00:03:34.445 CC module/event/subsystems/sock/sock.o 00:03:34.445 CC module/event/subsystems/vmd/vmd.o 00:03:34.445 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:34.703 LIB libspdk_event_keyring.a 00:03:34.703 LIB libspdk_event_vfu_tgt.a 00:03:34.703 LIB libspdk_event_vmd.a 00:03:34.703 LIB libspdk_event_sock.a 00:03:34.703 LIB libspdk_event_vhost_blk.a 00:03:34.703 SO libspdk_event_vfu_tgt.so.3.0 00:03:34.703 SO libspdk_event_keyring.so.1.0 00:03:34.703 LIB libspdk_event_scheduler.a 00:03:34.703 SO libspdk_event_sock.so.5.0 00:03:34.703 SO libspdk_event_vmd.so.6.0 00:03:34.703 SO libspdk_event_vhost_blk.so.3.0 00:03:34.703 LIB libspdk_event_iobuf.a 00:03:34.703 SO libspdk_event_scheduler.so.4.0 00:03:34.703 SO libspdk_event_iobuf.so.3.0 00:03:34.703 SYMLINK libspdk_event_vfu_tgt.so 00:03:34.703 SYMLINK libspdk_event_keyring.so 00:03:34.703 SYMLINK libspdk_event_sock.so 00:03:34.962 SYMLINK libspdk_event_vmd.so 00:03:34.962 SYMLINK libspdk_event_vhost_blk.so 00:03:34.962 SYMLINK libspdk_event_scheduler.so 00:03:34.962 SYMLINK libspdk_event_iobuf.so 00:03:35.220 CC module/event/subsystems/accel/accel.o 00:03:35.479 LIB libspdk_event_accel.a 00:03:35.479 SO libspdk_event_accel.so.6.0 00:03:35.479 SYMLINK libspdk_event_accel.so 00:03:35.738 CC module/event/subsystems/bdev/bdev.o 00:03:36.303 LIB libspdk_event_bdev.a 00:03:36.303 SO libspdk_event_bdev.so.6.0 00:03:36.303 SYMLINK libspdk_event_bdev.so 00:03:36.561 CC module/event/subsystems/nbd/nbd.o 00:03:36.561 CC module/event/subsystems/scsi/scsi.o 00:03:36.561 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:36.561 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:36.561 CC 
module/event/subsystems/ublk/ublk.o 00:03:36.820 LIB libspdk_event_nbd.a 00:03:36.820 LIB libspdk_event_ublk.a 00:03:36.820 SO libspdk_event_nbd.so.6.0 00:03:36.820 LIB libspdk_event_scsi.a 00:03:36.820 SO libspdk_event_ublk.so.3.0 00:03:36.820 SO libspdk_event_scsi.so.6.0 00:03:36.820 SYMLINK libspdk_event_nbd.so 00:03:36.820 SYMLINK libspdk_event_ublk.so 00:03:36.820 SYMLINK libspdk_event_scsi.so 00:03:36.820 LIB libspdk_event_nvmf.a 00:03:36.820 SO libspdk_event_nvmf.so.6.0 00:03:37.078 SYMLINK libspdk_event_nvmf.so 00:03:37.078 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:37.078 CC module/event/subsystems/iscsi/iscsi.o 00:03:37.337 LIB libspdk_event_vhost_scsi.a 00:03:37.595 LIB libspdk_event_iscsi.a 00:03:37.595 SO libspdk_event_vhost_scsi.so.3.0 00:03:37.595 SO libspdk_event_iscsi.so.6.0 00:03:37.595 SYMLINK libspdk_event_vhost_scsi.so 00:03:37.595 SYMLINK libspdk_event_iscsi.so 00:03:37.853 SO libspdk.so.6.0 00:03:37.853 SYMLINK libspdk.so 00:03:37.853 CC app/trace_record/trace_record.o 00:03:38.117 CC test/rpc_client/rpc_client_test.o 00:03:38.117 TEST_HEADER include/spdk/accel.h 00:03:38.117 TEST_HEADER include/spdk/accel_module.h 00:03:38.117 CXX app/trace/trace.o 00:03:38.117 CC app/spdk_lspci/spdk_lspci.o 00:03:38.117 TEST_HEADER include/spdk/barrier.h 00:03:38.117 TEST_HEADER include/spdk/assert.h 00:03:38.117 TEST_HEADER include/spdk/base64.h 00:03:38.117 TEST_HEADER include/spdk/bdev.h 00:03:38.117 TEST_HEADER include/spdk/bdev_module.h 00:03:38.117 TEST_HEADER include/spdk/bdev_zone.h 00:03:38.117 TEST_HEADER include/spdk/bit_array.h 00:03:38.117 CC app/spdk_nvme_identify/identify.o 00:03:38.117 TEST_HEADER include/spdk/bit_pool.h 00:03:38.117 TEST_HEADER include/spdk/blob_bdev.h 00:03:38.117 CC app/spdk_nvme_discover/discovery_aer.o 00:03:38.117 CC app/spdk_top/spdk_top.o 00:03:38.117 TEST_HEADER include/spdk/blobfs.h 00:03:38.117 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:38.117 CC app/spdk_nvme_perf/perf.o 00:03:38.117 TEST_HEADER include/spdk/blob.h 00:03:38.117 TEST_HEADER include/spdk/conf.h 00:03:38.117 TEST_HEADER include/spdk/config.h 00:03:38.117 TEST_HEADER include/spdk/cpuset.h 00:03:38.117 TEST_HEADER include/spdk/crc16.h 00:03:38.117 TEST_HEADER include/spdk/crc32.h 00:03:38.117 TEST_HEADER include/spdk/crc64.h 00:03:38.117 TEST_HEADER include/spdk/dif.h 00:03:38.117 TEST_HEADER include/spdk/dma.h 00:03:38.117 TEST_HEADER include/spdk/endian.h 00:03:38.117 TEST_HEADER include/spdk/env_dpdk.h 00:03:38.117 TEST_HEADER include/spdk/env.h 00:03:38.117 TEST_HEADER include/spdk/event.h 00:03:38.117 TEST_HEADER include/spdk/fd_group.h 00:03:38.117 TEST_HEADER include/spdk/fd.h 00:03:38.117 TEST_HEADER include/spdk/file.h 00:03:38.117 TEST_HEADER include/spdk/ftl.h 00:03:38.117 TEST_HEADER include/spdk/gpt_spec.h 00:03:38.117 TEST_HEADER include/spdk/hexlify.h 00:03:38.117 TEST_HEADER include/spdk/histogram_data.h 00:03:38.117 TEST_HEADER include/spdk/idxd.h 00:03:38.117 TEST_HEADER include/spdk/idxd_spec.h 00:03:38.117 TEST_HEADER include/spdk/init.h 00:03:38.117 TEST_HEADER include/spdk/ioat.h 00:03:38.117 TEST_HEADER include/spdk/ioat_spec.h 00:03:38.117 TEST_HEADER include/spdk/iscsi_spec.h 00:03:38.117 TEST_HEADER include/spdk/json.h 00:03:38.117 TEST_HEADER include/spdk/jsonrpc.h 00:03:38.117 TEST_HEADER include/spdk/keyring.h 00:03:38.117 TEST_HEADER include/spdk/keyring_module.h 00:03:38.117 TEST_HEADER include/spdk/likely.h 00:03:38.117 TEST_HEADER include/spdk/log.h 00:03:38.117 TEST_HEADER include/spdk/lvol.h 00:03:38.117 TEST_HEADER 
include/spdk/memory.h 00:03:38.117 TEST_HEADER include/spdk/mmio.h 00:03:38.117 TEST_HEADER include/spdk/nbd.h 00:03:38.117 TEST_HEADER include/spdk/net.h 00:03:38.117 TEST_HEADER include/spdk/notify.h 00:03:38.117 TEST_HEADER include/spdk/nvme.h 00:03:38.117 TEST_HEADER include/spdk/nvme_intel.h 00:03:38.117 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:38.117 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:38.117 TEST_HEADER include/spdk/nvme_spec.h 00:03:38.117 TEST_HEADER include/spdk/nvme_zns.h 00:03:38.117 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:38.117 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:38.117 TEST_HEADER include/spdk/nvmf.h 00:03:38.117 TEST_HEADER include/spdk/nvmf_spec.h 00:03:38.117 TEST_HEADER include/spdk/nvmf_transport.h 00:03:38.117 TEST_HEADER include/spdk/opal.h 00:03:38.117 TEST_HEADER include/spdk/pci_ids.h 00:03:38.117 TEST_HEADER include/spdk/opal_spec.h 00:03:38.117 TEST_HEADER include/spdk/pipe.h 00:03:38.117 TEST_HEADER include/spdk/queue.h 00:03:38.117 TEST_HEADER include/spdk/reduce.h 00:03:38.117 TEST_HEADER include/spdk/rpc.h 00:03:38.117 TEST_HEADER include/spdk/scheduler.h 00:03:38.117 TEST_HEADER include/spdk/scsi.h 00:03:38.118 TEST_HEADER include/spdk/scsi_spec.h 00:03:38.118 TEST_HEADER include/spdk/sock.h 00:03:38.118 TEST_HEADER include/spdk/stdinc.h 00:03:38.118 TEST_HEADER include/spdk/string.h 00:03:38.118 TEST_HEADER include/spdk/thread.h 00:03:38.118 TEST_HEADER include/spdk/trace.h 00:03:38.118 TEST_HEADER include/spdk/tree.h 00:03:38.118 TEST_HEADER include/spdk/trace_parser.h 00:03:38.118 TEST_HEADER include/spdk/ublk.h 00:03:38.118 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:38.118 TEST_HEADER include/spdk/uuid.h 00:03:38.118 TEST_HEADER include/spdk/util.h 00:03:38.118 TEST_HEADER include/spdk/version.h 00:03:38.118 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:38.118 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:38.118 TEST_HEADER include/spdk/vhost.h 00:03:38.118 TEST_HEADER include/spdk/vmd.h 00:03:38.118 TEST_HEADER include/spdk/xor.h 00:03:38.118 TEST_HEADER include/spdk/zipf.h 00:03:38.118 CXX test/cpp_headers/accel.o 00:03:38.118 CXX test/cpp_headers/accel_module.o 00:03:38.118 CXX test/cpp_headers/assert.o 00:03:38.118 CXX test/cpp_headers/barrier.o 00:03:38.118 CXX test/cpp_headers/base64.o 00:03:38.118 CXX test/cpp_headers/bdev.o 00:03:38.118 CXX test/cpp_headers/bdev_module.o 00:03:38.118 CXX test/cpp_headers/bdev_zone.o 00:03:38.118 CXX test/cpp_headers/bit_array.o 00:03:38.118 CXX test/cpp_headers/bit_pool.o 00:03:38.118 CXX test/cpp_headers/blob_bdev.o 00:03:38.118 CXX test/cpp_headers/blobfs_bdev.o 00:03:38.118 CXX test/cpp_headers/blobfs.o 00:03:38.118 CXX test/cpp_headers/blob.o 00:03:38.118 CXX test/cpp_headers/conf.o 00:03:38.118 CXX test/cpp_headers/config.o 00:03:38.118 CXX test/cpp_headers/cpuset.o 00:03:38.118 CXX test/cpp_headers/crc16.o 00:03:38.118 CC app/spdk_dd/spdk_dd.o 00:03:38.118 CC app/iscsi_tgt/iscsi_tgt.o 00:03:38.118 CC app/nvmf_tgt/nvmf_main.o 00:03:38.118 CXX test/cpp_headers/crc32.o 00:03:38.118 CC app/spdk_tgt/spdk_tgt.o 00:03:38.118 CC test/app/histogram_perf/histogram_perf.o 00:03:38.118 CC examples/util/zipf/zipf.o 00:03:38.118 CC test/thread/poller_perf/poller_perf.o 00:03:38.118 CC test/app/stub/stub.o 00:03:38.118 CC test/env/memory/memory_ut.o 00:03:38.118 CC app/fio/nvme/fio_plugin.o 00:03:38.118 CC test/env/pci/pci_ut.o 00:03:38.118 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:38.118 CC examples/ioat/perf/perf.o 00:03:38.118 CC test/env/vtophys/vtophys.o 
00:03:38.118 CC test/app/jsoncat/jsoncat.o 00:03:38.118 CC examples/ioat/verify/verify.o 00:03:38.118 CC test/dma/test_dma/test_dma.o 00:03:38.118 CC test/app/bdev_svc/bdev_svc.o 00:03:38.401 CC app/fio/bdev/fio_plugin.o 00:03:38.401 LINK spdk_lspci 00:03:38.401 CC test/env/mem_callbacks/mem_callbacks.o 00:03:38.401 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:38.401 LINK rpc_client_test 00:03:38.401 LINK spdk_nvme_discover 00:03:38.401 LINK interrupt_tgt 00:03:38.401 LINK histogram_perf 00:03:38.401 LINK poller_perf 00:03:38.401 LINK jsoncat 00:03:38.401 LINK zipf 00:03:38.691 CXX test/cpp_headers/crc64.o 00:03:38.691 CXX test/cpp_headers/dif.o 00:03:38.691 CXX test/cpp_headers/dma.o 00:03:38.691 LINK vtophys 00:03:38.691 LINK nvmf_tgt 00:03:38.691 LINK spdk_trace_record 00:03:38.691 CXX test/cpp_headers/endian.o 00:03:38.691 CXX test/cpp_headers/env_dpdk.o 00:03:38.691 CXX test/cpp_headers/env.o 00:03:38.691 CXX test/cpp_headers/event.o 00:03:38.691 LINK stub 00:03:38.691 LINK env_dpdk_post_init 00:03:38.691 CXX test/cpp_headers/fd_group.o 00:03:38.691 CXX test/cpp_headers/fd.o 00:03:38.691 CXX test/cpp_headers/file.o 00:03:38.691 CXX test/cpp_headers/ftl.o 00:03:38.691 CXX test/cpp_headers/gpt_spec.o 00:03:38.691 LINK iscsi_tgt 00:03:38.691 CXX test/cpp_headers/hexlify.o 00:03:38.691 CXX test/cpp_headers/histogram_data.o 00:03:38.691 CXX test/cpp_headers/idxd.o 00:03:38.691 CXX test/cpp_headers/idxd_spec.o 00:03:38.691 LINK spdk_tgt 00:03:38.691 LINK bdev_svc 00:03:38.691 LINK verify 00:03:38.691 LINK ioat_perf 00:03:38.691 CXX test/cpp_headers/init.o 00:03:38.691 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:38.691 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:38.691 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:38.959 CXX test/cpp_headers/ioat.o 00:03:38.959 CXX test/cpp_headers/ioat_spec.o 00:03:38.959 CXX test/cpp_headers/iscsi_spec.o 00:03:38.959 CXX test/cpp_headers/json.o 00:03:38.959 CXX test/cpp_headers/jsonrpc.o 00:03:38.959 LINK spdk_dd 00:03:38.959 LINK pci_ut 00:03:38.959 LINK spdk_trace 00:03:38.959 CXX test/cpp_headers/keyring.o 00:03:38.959 CXX test/cpp_headers/keyring_module.o 00:03:38.959 CXX test/cpp_headers/likely.o 00:03:38.959 CXX test/cpp_headers/log.o 00:03:38.959 CXX test/cpp_headers/lvol.o 00:03:38.959 CXX test/cpp_headers/memory.o 00:03:38.959 CXX test/cpp_headers/mmio.o 00:03:38.959 CXX test/cpp_headers/nbd.o 00:03:38.959 CXX test/cpp_headers/net.o 00:03:38.959 CXX test/cpp_headers/notify.o 00:03:38.959 CXX test/cpp_headers/nvme.o 00:03:38.959 CXX test/cpp_headers/nvme_intel.o 00:03:38.959 CXX test/cpp_headers/nvme_ocssd.o 00:03:38.959 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:38.959 CXX test/cpp_headers/nvme_spec.o 00:03:38.959 CXX test/cpp_headers/nvme_zns.o 00:03:38.959 CXX test/cpp_headers/nvmf_cmd.o 00:03:38.959 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:38.959 CXX test/cpp_headers/nvmf.o 00:03:38.959 LINK test_dma 00:03:38.959 CXX test/cpp_headers/nvmf_spec.o 00:03:38.959 CXX test/cpp_headers/nvmf_transport.o 00:03:38.959 CXX test/cpp_headers/opal.o 00:03:39.217 CXX test/cpp_headers/opal_spec.o 00:03:39.217 LINK nvme_fuzz 00:03:39.217 CC test/event/event_perf/event_perf.o 00:03:39.217 CXX test/cpp_headers/pci_ids.o 00:03:39.217 CXX test/cpp_headers/pipe.o 00:03:39.217 CC test/event/reactor/reactor.o 00:03:39.217 CC examples/sock/hello_world/hello_sock.o 00:03:39.217 LINK spdk_bdev 00:03:39.217 CC test/event/reactor_perf/reactor_perf.o 00:03:39.217 CC examples/idxd/perf/perf.o 00:03:39.217 CXX test/cpp_headers/queue.o 00:03:39.217 CC 
examples/vmd/lsvmd/lsvmd.o 00:03:39.217 CC examples/thread/thread/thread_ex.o 00:03:39.217 LINK spdk_nvme 00:03:39.217 CXX test/cpp_headers/reduce.o 00:03:39.475 CXX test/cpp_headers/rpc.o 00:03:39.475 CXX test/cpp_headers/scheduler.o 00:03:39.475 CC test/event/app_repeat/app_repeat.o 00:03:39.475 CC examples/vmd/led/led.o 00:03:39.475 CXX test/cpp_headers/scsi.o 00:03:39.475 CXX test/cpp_headers/scsi_spec.o 00:03:39.475 CXX test/cpp_headers/sock.o 00:03:39.475 CXX test/cpp_headers/stdinc.o 00:03:39.475 CXX test/cpp_headers/string.o 00:03:39.475 CXX test/cpp_headers/thread.o 00:03:39.475 CXX test/cpp_headers/trace.o 00:03:39.475 CXX test/cpp_headers/trace_parser.o 00:03:39.475 CXX test/cpp_headers/tree.o 00:03:39.475 CXX test/cpp_headers/ublk.o 00:03:39.475 CXX test/cpp_headers/util.o 00:03:39.475 CXX test/cpp_headers/uuid.o 00:03:39.475 CXX test/cpp_headers/version.o 00:03:39.475 CXX test/cpp_headers/vfio_user_pci.o 00:03:39.475 CXX test/cpp_headers/vfio_user_spec.o 00:03:39.475 CXX test/cpp_headers/vhost.o 00:03:39.475 CXX test/cpp_headers/vmd.o 00:03:39.475 CC test/event/scheduler/scheduler.o 00:03:39.475 CXX test/cpp_headers/xor.o 00:03:39.475 CXX test/cpp_headers/zipf.o 00:03:39.475 LINK vhost_fuzz 00:03:39.475 LINK event_perf 00:03:39.475 LINK reactor_perf 00:03:39.475 LINK lsvmd 00:03:39.475 LINK reactor 00:03:39.741 CC app/vhost/vhost.o 00:03:39.741 LINK spdk_nvme_perf 00:03:39.741 LINK mem_callbacks 00:03:39.741 LINK app_repeat 00:03:39.741 LINK led 00:03:39.741 LINK spdk_nvme_identify 00:03:39.741 LINK hello_sock 00:03:39.741 LINK spdk_top 00:03:39.741 LINK thread 00:03:40.001 CC test/nvme/sgl/sgl.o 00:03:40.001 CC test/nvme/e2edp/nvme_dp.o 00:03:40.001 CC test/nvme/reset/reset.o 00:03:40.001 CC test/nvme/aer/aer.o 00:03:40.001 CC test/nvme/overhead/overhead.o 00:03:40.001 CC test/nvme/simple_copy/simple_copy.o 00:03:40.001 CC test/nvme/reserve/reserve.o 00:03:40.001 CC test/nvme/err_injection/err_injection.o 00:03:40.001 CC test/nvme/startup/startup.o 00:03:40.001 CC test/accel/dif/dif.o 00:03:40.001 CC test/blobfs/mkfs/mkfs.o 00:03:40.001 LINK idxd_perf 00:03:40.001 CC test/nvme/connect_stress/connect_stress.o 00:03:40.001 CC test/nvme/boot_partition/boot_partition.o 00:03:40.001 CC test/nvme/compliance/nvme_compliance.o 00:03:40.001 CC test/nvme/fused_ordering/fused_ordering.o 00:03:40.001 CC test/lvol/esnap/esnap.o 00:03:40.001 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:40.001 CC test/nvme/cuse/cuse.o 00:03:40.001 CC test/nvme/fdp/fdp.o 00:03:40.001 LINK vhost 00:03:40.001 LINK scheduler 00:03:40.259 LINK err_injection 00:03:40.259 LINK boot_partition 00:03:40.259 LINK startup 00:03:40.259 LINK sgl 00:03:40.259 LINK memory_ut 00:03:40.259 LINK fused_ordering 00:03:40.259 LINK simple_copy 00:03:40.259 LINK nvme_dp 00:03:40.259 LINK reserve 00:03:40.259 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:40.259 CC examples/nvme/hello_world/hello_world.o 00:03:40.259 LINK aer 00:03:40.259 CC examples/nvme/abort/abort.o 00:03:40.259 CC examples/nvme/reconnect/reconnect.o 00:03:40.259 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:40.259 CC examples/nvme/arbitration/arbitration.o 00:03:40.259 CC examples/nvme/hotplug/hotplug.o 00:03:40.259 LINK connect_stress 00:03:40.259 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:40.259 LINK mkfs 00:03:40.259 LINK doorbell_aers 00:03:40.259 LINK overhead 00:03:40.259 LINK reset 00:03:40.259 CC examples/accel/perf/accel_perf.o 00:03:40.517 LINK nvme_compliance 00:03:40.517 CC examples/blob/hello_world/hello_blob.o 00:03:40.517 CC 
examples/blob/cli/blobcli.o 00:03:40.517 LINK fdp 00:03:40.517 LINK cmb_copy 00:03:40.517 LINK dif 00:03:40.517 LINK pmr_persistence 00:03:40.517 LINK hotplug 00:03:40.775 LINK hello_world 00:03:40.775 LINK abort 00:03:40.775 LINK arbitration 00:03:40.775 LINK hello_blob 00:03:41.033 LINK reconnect 00:03:41.033 LINK nvme_manage 00:03:41.033 LINK accel_perf 00:03:41.033 CC test/bdev/bdevio/bdevio.o 00:03:41.033 LINK blobcli 00:03:41.291 LINK iscsi_fuzz 00:03:41.559 CC examples/bdev/hello_world/hello_bdev.o 00:03:41.559 CC examples/bdev/bdevperf/bdevperf.o 00:03:41.559 LINK bdevio 00:03:41.817 LINK cuse 00:03:42.075 LINK hello_bdev 00:03:42.642 LINK bdevperf 00:03:43.576 CC examples/nvmf/nvmf/nvmf.o 00:03:44.142 LINK nvmf 00:03:46.671 LINK esnap 00:03:47.236 00:03:47.236 real 1m9.521s 00:03:47.236 user 11m12.668s 00:03:47.236 sys 2m43.782s 00:03:47.236 11:11:42 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:47.236 11:11:42 make -- common/autotest_common.sh@10 -- $ set +x 00:03:47.236 ************************************ 00:03:47.236 END TEST make 00:03:47.236 ************************************ 00:03:47.236 11:11:42 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:47.236 11:11:42 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:47.236 11:11:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:47.236 11:11:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:47.236 11:11:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:47.236 11:11:42 -- pm/common@44 -- $ pid=1899536 00:03:47.236 11:11:42 -- pm/common@50 -- $ kill -TERM 1899536 00:03:47.236 11:11:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:47.236 11:11:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:47.236 11:11:42 -- pm/common@44 -- $ pid=1899538 00:03:47.236 11:11:42 -- pm/common@50 -- $ kill -TERM 1899538 00:03:47.236 11:11:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:47.236 11:11:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:47.236 11:11:42 -- pm/common@44 -- $ pid=1899540 00:03:47.236 11:11:42 -- pm/common@50 -- $ kill -TERM 1899540 00:03:47.236 11:11:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:47.236 11:11:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:47.236 11:11:42 -- pm/common@44 -- $ pid=1899570 00:03:47.236 11:11:42 -- pm/common@50 -- $ sudo -E kill -TERM 1899570 00:03:47.495 11:11:43 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:47.495 11:11:43 -- nvmf/common.sh@7 -- # uname -s 00:03:47.495 11:11:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:47.495 11:11:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:47.495 11:11:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:47.495 11:11:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:47.495 11:11:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:47.495 11:11:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:47.495 11:11:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:47.495 11:11:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:47.495 11:11:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
00:03:47.495 11:11:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:47.495 11:11:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:03:47.495 11:11:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:03:47.495 11:11:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:47.495 11:11:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:47.495 11:11:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:47.495 11:11:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:47.495 11:11:43 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:47.495 11:11:43 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:47.495 11:11:43 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:47.495 11:11:43 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:47.495 11:11:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:47.495 11:11:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:47.495 11:11:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:47.496 11:11:43 -- paths/export.sh@5 -- # export PATH 00:03:47.496 11:11:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:47.496 11:11:43 -- nvmf/common.sh@47 -- # : 0 00:03:47.496 11:11:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:47.496 11:11:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:47.496 11:11:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:47.496 11:11:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:47.496 11:11:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:47.496 11:11:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:47.496 11:11:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:47.496 11:11:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:47.496 11:11:43 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:47.496 11:11:43 -- spdk/autotest.sh@32 -- # uname -s 00:03:47.496 11:11:43 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:47.496 11:11:43 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:47.496 11:11:43 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:47.496 11:11:43 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 
00:03:47.496 11:11:43 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:03:47.496 11:11:43 -- spdk/autotest.sh@44 -- # modprobe nbd
00:03:47.496 11:11:43 -- spdk/autotest.sh@46 -- # type -P udevadm
00:03:47.496 11:11:43 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:03:47.496 11:11:43 -- spdk/autotest.sh@48 -- # udevadm_pid=1958193
00:03:47.496 11:11:43 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:03:47.496 11:11:43 -- spdk/autotest.sh@53 -- # start_monitor_resources
00:03:47.496 11:11:43 -- pm/common@17 -- # local monitor
00:03:47.496 11:11:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:03:47.496 11:11:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:03:47.496 11:11:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:03:47.496 11:11:43 -- pm/common@21 -- # date +%s
00:03:47.496 11:11:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:03:47.496 11:11:43 -- pm/common@21 -- # date +%s
00:03:47.496 11:11:43 -- pm/common@25 -- # sleep 1
00:03:47.496 11:11:43 -- pm/common@21 -- # date +%s
00:03:47.496 11:11:43 -- pm/common@21 -- # date +%s
00:03:47.496 11:11:43 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721985103
00:03:47.496 11:11:43 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721985103
00:03:47.496 11:11:43 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721985103
00:03:47.496 11:11:43 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721985103
00:03:47.496 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721985103_collect-vmstat.pm.log
00:03:47.496 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721985103_collect-cpu-load.pm.log
00:03:47.496 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721985103_collect-cpu-temp.pm.log
00:03:47.496 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721985103_collect-bmc-pm.bmc.pm.log
00:03:48.429 11:11:44 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:03:48.429 11:11:44 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:03:48.429 11:11:44 -- common/autotest_common.sh@724 -- # xtrace_disable
00:03:48.429 11:11:44 -- common/autotest_common.sh@10 -- # set +x
00:03:48.429 11:11:44 -- spdk/autotest.sh@59 -- # create_test_list
00:03:48.429 11:11:44 -- common/autotest_common.sh@748 -- # xtrace_disable
00:03:48.429 11:11:44 -- common/autotest_common.sh@10 -- # set +x
00:03:48.686 11:11:44 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh
00:03:48.686 11:11:44 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:48.686 11:11:44 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:48.686 11:11:44 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:03:48.686 11:11:44 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:48.686 11:11:44 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:03:48.686 11:11:44 -- common/autotest_common.sh@1455 -- # uname
00:03:48.686 11:11:44 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']'
00:03:48.686 11:11:44 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:03:48.686 11:11:44 -- common/autotest_common.sh@1475 -- # uname
00:03:48.686 11:11:44 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]]
00:03:48.686 11:11:44 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk
00:03:48.686 11:11:44 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc
00:03:48.686 11:11:44 -- spdk/autotest.sh@72 -- # hash lcov
00:03:48.686 11:11:44 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:03:48.686 11:11:44 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS=
00:03:48.686 --rc lcov_branch_coverage=1
00:03:48.686 --rc lcov_function_coverage=1
00:03:48.686 --rc genhtml_branch_coverage=1
00:03:48.686 --rc genhtml_function_coverage=1
00:03:48.686 --rc genhtml_legend=1
00:03:48.686 --rc geninfo_all_blocks=1
00:03:48.686 '
00:03:48.686 11:11:44 -- spdk/autotest.sh@80 -- # LCOV_OPTS='
00:03:48.686 --rc lcov_branch_coverage=1
00:03:48.686 --rc lcov_function_coverage=1
00:03:48.686 --rc genhtml_branch_coverage=1
00:03:48.686 --rc genhtml_function_coverage=1
00:03:48.686 --rc genhtml_legend=1
00:03:48.686 --rc geninfo_all_blocks=1
00:03:48.686 '
00:03:48.686 11:11:44 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov
00:03:48.686 --rc lcov_branch_coverage=1
00:03:48.686 --rc lcov_function_coverage=1
00:03:48.686 --rc genhtml_branch_coverage=1
00:03:48.686 --rc genhtml_function_coverage=1
00:03:48.686 --rc genhtml_legend=1
00:03:48.686 --rc geninfo_all_blocks=1
00:03:48.686 --no-external'
00:03:48.686 11:11:44 -- spdk/autotest.sh@81 -- # LCOV='lcov
00:03:48.686 --rc lcov_branch_coverage=1
00:03:48.686 --rc lcov_function_coverage=1
00:03:48.686 --rc genhtml_branch_coverage=1
00:03:48.686 --rc genhtml_function_coverage=1
00:03:48.686 --rc genhtml_legend=1
00:03:48.686 --rc geninfo_all_blocks=1
00:03:48.686 --no-external'
00:03:48.686 11:11:44 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v
00:03:48.686 lcov: LCOV version 1.14
00:03:48.686 11:11:44 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info
00:04:15.266 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:04:15.266 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:04:33.337 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found
00:04:33.337 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno
00:04:33.337 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found
00:04:33.337 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno
00:04:33.337 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found
00:04:33.337 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno
00:04:33.337 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found
00:04:33.337 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno
00:04:33.337 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found
00:04:33.337 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno
00:04:33.337 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found
00:04:33.337 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno
00:04:33.337 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found
00:04:33.337 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno
00:04:33.337 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found
00:04:33.337 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno
00:04:33.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found
00:04:33.338 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno
00:04:33.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found
00:04:33.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno
00:04:33.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found
00:04:33.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno
00:04:33.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found
00:04:33.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno
00:04:33.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found
00:04:33.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno
00:04:33.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found
00:04:33.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno
00:04:33.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found
00:04:33.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno
00:04:33.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found
00:04:33.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno
00:04:33.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found
00:04:33.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno
00:04:33.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found
00:04:33.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno
00:04:33.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found
00:04:33.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno
00:04:33.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found
00:04:33.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno
00:04:33.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found
00:04:33.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno
00:04:33.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found
00:04:33.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno
00:04:33.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found
00:04:33.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno
00:04:33.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found
00:04:33.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno
00:04:33.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found
00:04:33.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno
00:04:33.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found
00:04:33.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno
00:04:33.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found
00:04:33.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno
00:04:33.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found
00:04:33.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno
00:04:33.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found
00:04:33.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno
00:04:33.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found
00:04:33.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno
00:04:33.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found
00:04:33.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno
00:04:33.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found
00:04:33.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno
00:04:33.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found
00:04:33.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno
00:04:33.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found
00:04:33.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno
00:04:33.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found
00:04:33.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno
00:04:33.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found
00:04:33.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno
00:04:33.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found
00:04:33.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno
00:04:33.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found
00:04:33.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno
00:04:33.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found
00:04:33.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno
00:04:39.898 11:12:34 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
00:04:39.898 11:12:34 -- common/autotest_common.sh@724 -- # xtrace_disable
00:04:39.898 11:12:34 -- common/autotest_common.sh@10 -- # set +x
00:04:39.898 11:12:34 -- spdk/autotest.sh@91 -- # rm -f
00:04:39.898 11:12:34 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:40.465 0000:82:00.0 (8086 0a54): Already using the nvme driver
00:04:40.465 0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:04:40.465 0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:04:40.465 0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:04:40.465 0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:04:40.465 0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:04:40.465 0000:00:04.2 (8086 0e22): Already using the ioatdma driver
00:04:40.465 0000:00:04.1 (8086 0e21): Already using the ioatdma driver
00:04:40.465 0000:00:04.0 (8086 0e20): Already using the ioatdma driver
00:04:40.465 0000:80:04.7 (8086 0e27): Already using the ioatdma driver
00:04:40.465 0000:80:04.6 (8086 0e26): Already using the ioatdma driver
00:04:40.465 0000:80:04.5 (8086 0e25): Already using the ioatdma driver
00:04:40.726 0000:80:04.4 (8086 0e24): Already using the ioatdma driver
00:04:40.726 0000:80:04.3 (8086 0e23): Already using the ioatdma driver
00:04:40.726 0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:04:40.726 0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:04:40.726 0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:04:40.726 11:12:36 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:04:40.726 11:12:36 -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:04:40.726 11:12:36 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:04:40.726 11:12:36 -- common/autotest_common.sh@1670 -- # local nvme bdf
00:04:40.726 11:12:36 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:04:40.726 11:12:36 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:04:40.726 11:12:36 -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:04:40.726 11:12:36 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:40.726 11:12:36 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:04:40.726 11:12:36 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:04:40.726 11:12:36 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:04:40.726 11:12:36 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:04:40.726 11:12:36 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:04:40.726 11:12:36 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:04:40.726 11:12:36 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:04:40.984 No valid GPT data, bailing
00:04:40.984 11:12:36 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:40.984 11:12:36 -- scripts/common.sh@391 -- # pt=
00:04:40.984 11:12:36 -- scripts/common.sh@392 -- # return 1
00:04:40.984 11:12:36 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:04:40.985 1+0 records in
00:04:40.985 1+0 records out
00:04:40.985 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00369844 s, 284 MB/s
00:04:40.985 11:12:36 -- spdk/autotest.sh@118 -- # sync
00:04:40.985 11:12:36 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:04:40.985 11:12:36 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:04:40.985 11:12:36 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:04:43.518 11:12:38 -- spdk/autotest.sh@124 -- # uname -s
00:04:43.518 11:12:38 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:04:43.518 11:12:38 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:04:43.518 11:12:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:43.518 11:12:38 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:43.518 11:12:38 -- common/autotest_common.sh@10 -- # set +x
00:04:43.518 ************************************
00:04:43.518 START TEST setup.sh
00:04:43.518 ************************************
00:04:43.518 11:12:38 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:04:43.518 * Looking for test storage...
00:04:43.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:04:43.518 11:12:38 setup.sh -- setup/test-setup.sh@10 -- # uname -s
00:04:43.518 11:12:38 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:04:43.518 11:12:38 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:04:43.518 11:12:38 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:43.518 11:12:38 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:43.518 11:12:38 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:43.518 ************************************
00:04:43.518 START TEST acl
00:04:43.518 ************************************
00:04:43.518 11:12:38 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:04:43.518 * Looking for test storage...
00:04:43.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:43.518 11:12:38 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:43.518 11:12:38 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:43.518 11:12:38 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:43.518 11:12:38 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:43.518 11:12:38 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:43.518 11:12:38 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:43.518 11:12:38 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:43.518 11:12:38 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:43.518 11:12:38 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:43.518 11:12:38 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:43.518 11:12:38 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:43.518 11:12:38 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:43.518 11:12:38 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:43.518 11:12:38 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:43.518 11:12:38 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:43.518 11:12:38 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:45.422 11:12:40 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:45.422 11:12:40 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:45.422 11:12:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:45.422 11:12:40 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:45.422 11:12:40 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.422 11:12:40 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:46.803 Hugepages 00:04:46.803 node hugesize free / total 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:46.803 00:04:46.803 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:46.803 11:12:42 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:82:00.0 == *:*:*.* ]] 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\2\:\0\0\.\0* ]] 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:46.803 11:12:42 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:46.803 11:12:42 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:46.803 11:12:42 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.803 11:12:42 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:46.803 ************************************ 00:04:46.803 START TEST denied 00:04:46.803 ************************************ 00:04:46.803 11:12:42 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:04:46.803 11:12:42 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:82:00.0' 00:04:46.803 11:12:42 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:46.803 11:12:42 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:82:00.0' 00:04:46.803 11:12:42 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.803 11:12:42 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:48.738 0000:82:00.0 (8086 0a54): Skipping denied controller at 0000:82:00.0 00:04:48.738 11:12:44 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:82:00.0 00:04:48.738 11:12:44 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:48.738 11:12:44 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:48.738 11:12:44 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:82:00.0 ]] 00:04:48.738 11:12:44 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:82:00.0/driver 00:04:48.738 11:12:44 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:48.738 11:12:44 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:48.738 11:12:44 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:48.738 11:12:44 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:48.738 11:12:44 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:52.029 00:04:52.029 real 0m4.635s 00:04:52.029 user 0m1.421s 00:04:52.029 sys 0m2.419s 00:04:52.029 11:12:47 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:52.029 11:12:47 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:52.029 ************************************ 00:04:52.029 END TEST denied 00:04:52.029 ************************************ 00:04:52.029 11:12:47 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:52.029 11:12:47 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:52.029 11:12:47 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:52.029 11:12:47 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:52.029 ************************************ 00:04:52.030 START TEST allowed 00:04:52.030 ************************************ 00:04:52.030 11:12:47 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:04:52.030 11:12:47 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:82:00.0 00:04:52.030 11:12:47 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:52.030 11:12:47 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:82:00.0 .*: nvme -> .*' 00:04:52.030 11:12:47 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:52.030 11:12:47 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:54.564 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:04:54.564 11:12:49 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:54.564 11:12:49 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:54.564 11:12:49 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:54.564 11:12:49 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:54.564 11:12:49 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:56.468 00:04:56.468 real 0m4.639s 00:04:56.468 user 0m1.313s 00:04:56.468 sys 0m2.226s 00:04:56.468 11:12:51 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.468 11:12:51 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:56.468 ************************************ 00:04:56.468 END TEST allowed 00:04:56.468 ************************************ 00:04:56.468 00:04:56.468 real 0m13.025s 00:04:56.468 user 0m4.176s 00:04:56.468 sys 0m7.054s 00:04:56.468 11:12:51 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.468 11:12:51 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:56.468 ************************************ 00:04:56.468 END TEST acl 00:04:56.468 ************************************ 00:04:56.468 11:12:51 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:56.468 11:12:51 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:56.468 11:12:51 setup.sh -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.468 11:12:51 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:56.468 ************************************ 00:04:56.468 START TEST hugepages 00:04:56.468 ************************************ 00:04:56.468 11:12:51 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:56.468 * Looking for test storage... 00:04:56.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:56.468 11:12:51 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:56.468 11:12:51 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:56.468 11:12:51 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:56.468 11:12:51 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:56.468 11:12:51 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:56.468 11:12:51 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:56.468 11:12:51 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:56.468 11:12:51 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:56.468 11:12:51 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:56.468 11:12:51 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:56.468 11:12:51 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.468 11:12:51 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.468 11:12:51 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.468 11:12:51 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.468 11:12:51 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.468 11:12:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:56.468 11:12:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:56.468 11:12:51 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 27201012 kB' 'MemAvailable: 30783328 kB' 'Buffers: 3736 kB' 'Cached: 10182952 kB' 'SwapCached: 0 kB' 'Active: 7195700 kB' 'Inactive: 3507860 kB' 'Active(anon): 6800876 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3507860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520128 kB' 'Mapped: 178716 kB' 'Shmem: 6284004 kB' 'KReclaimable: 183340 kB' 'Slab: 538856 kB' 'SReclaimable: 183340 kB' 'SUnreclaim: 355516 kB' 'KernelStack: 12384 kB' 'PageTables: 8180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28304780 kB' 'Committed_AS: 7932844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195600 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1793628 kB' 'DirectMap2M: 14903296 kB' 'DirectMap1G: 35651584 kB' 00:04:56.468 11:12:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:04:56.468 11:12:51 setup.sh.hugepages -- setup/common.sh@31-32 -- # [xtrace condensed: IFS=': ' read/continue loop skipped every non-matching /proc/meminfo field (MemFree through HugePages_Surp) while looking for Hugepagesize]
00:04:56.470 11:12:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:56.470 11:12:51 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:04:56.470 11:12:51 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:04:56.470 11:12:51 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:56.470 11:12:51 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:04:56.470 11:12:51 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:04:56.470 11:12:51 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:04:56.470 11:12:51 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:04:56.470 11:12:51 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:04:56.470 11:12:51 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:04:56.470 11:12:51 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:04:56.470 11:12:51 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:04:56.470 11:12:51 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:56.470 11:12:51 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:04:56.470 11:12:51 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:56.470 11:12:51 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:56.470 11:12:51 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:56.470 11:12:51 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:56.470 11:12:51 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:04:56.470 11:12:51 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:04:56.470 11:12:51 setup.sh.hugepages -- setup/hugepages.sh@39-41 -- # [xtrace condensed: for node 0 and node 1, echo 0 into each /sys/devices/system/node/node$node/hugepages/hugepages-*/nr_hugepages]
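The condensed get_meminfo trace above is the harness scanning /proc/meminfo one field at a time with IFS=': ' until the requested key matches, then echoing its value (here Hugepagesize -> 2048, i.e. 2 MiB pages); that is why the raw xtrace shows one [[ ... ]]/continue pair per field. A minimal standalone sketch of that parsing idiom, assuming the standard /proc/meminfo layout (a simplification, not the verbatim setup/common.sh source):

  get_meminfo() {
      local get=$1 var val _
      # /proc/meminfo lines look like "Hugepagesize:       2048 kB";
      # IFS=': ' splits each into key, value, and unit.
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"   # value only, in kB where applicable
              return 0
          fi
      done </proc/meminfo
      return 1
  }

  get_meminfo Hugepagesize   # -> 2048 on this system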
00:04:56.470 11:12:51 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:56.470 11:12:51 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:56.470 11:12:51 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:56.470 11:12:51 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:56.470 11:12:51 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:56.470 11:12:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:56.470 ************************************
00:04:56.470 START TEST default_setup
00:04:56.470 ************************************
00:04:56.470 11:12:52 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup
00:04:56.470 11:12:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:56.470 11:12:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:04:56.470 11:12:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:56.470 11:12:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:04:56.470 11:12:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:56.470 11:12:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:04:56.470 11:12:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:56.470 11:12:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:56.470 11:12:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:56.470 11:12:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:56.470 11:12:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:04:56.470 11:12:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:56.470 11:12:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:56.470 11:12:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:56.470 11:12:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:56.470 11:12:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:56.470 11:12:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:56.470 11:12:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:56.470 11:12:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:04:56.470 11:12:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:04:56.470 11:12:52 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:04:56.470 11:12:52 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:57.896 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:04:57.896 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:04:57.896 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:04:58.158 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:04:58.158 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:04:58.158 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:04:58.158 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:04:58.158 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:04:58.158 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:04:58.158 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:04:58.158 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:04:58.158 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:04:58.158 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:04:58.158 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:04:58.158 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:04:58.158 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:04:59.096 0000:82:00.0 (8086 0a54): nvme -> vfio-pci
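Two things just happened above: setup.sh rebound the ioatdma and NVMe devices to vfio-pci, and get_test_nr_hugepages turned the requested pool (size=2097152, in kB, i.e. 2 GiB) into a page count. The arithmetic in that trace is simply the pool size divided by the default page size, both in kB, with the result pinned to the node list passed in (node 0 here). A sketch of that computation (variable names mirror the xtrace; this is a simplification, not the verbatim setup/hugepages.sh source):

  size=2097152            # requested pool in kB (2 GiB)
  default_hugepages=2048  # Hugepagesize from /proc/meminfo, in kB
  nr_hugepages=$((size / default_hugepages))   # -> 1024 pages
  # default_setup passes node_ids=('0'), so all 1024 pages land on node 0:
  nodes_test[0]=$nr_hugepages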
00:04:59.096 11:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:59.096 11:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:04:59.096 11:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:04:59.096 11:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:04:59.096 11:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:04:59.096 11:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:04:59.096 11:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:04:59.096 11:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:59.096 11:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:59.096 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:59.096 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:59.096 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:59.096 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:59.096 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.096 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:59.096 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:59.096 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.096 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.096 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:59.096 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:59.096 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29302696 kB' 'MemAvailable: 32885012 kB' 'Buffers: 3736 kB' 'Cached: 10183044 kB' 'SwapCached: 0 kB' 'Active: 7213660 kB' 'Inactive: 3507860 kB' 'Active(anon): 6818836 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3507860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538384 kB' 'Mapped: 178840 kB' 'Shmem: 6284096 kB' 'KReclaimable: 183340 kB' 'Slab: 538276 kB' 'SReclaimable: 183340 kB' 'SUnreclaim: 354936 kB' 'KernelStack: 12544 kB' 'PageTables: 8712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7949904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195856 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1793628 kB' 'DirectMap2M: 14903296 kB' 'DirectMap1G: 35651584 kB'
00:04:59.096 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [xtrace condensed: read/continue loop skipped non-matching fields MemTotal through HardwareCorrupted while looking for AnonHugePages]
00:04:59.097 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:59.097 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:59.097 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:59.097 11:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:04:59.097 11:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:59.097 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:59.097 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:59.097 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:59.097 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:59.097 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.097 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:59.097 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:59.097 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.097 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.097 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:59.097 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:59.097 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29301836 kB' 'MemAvailable: 32884152 kB' 'Buffers: 3736 kB' 'Cached: 10183048 kB' 'SwapCached: 0 kB' 'Active: 7213824 kB' 'Inactive: 3507860 kB' 'Active(anon): 6819000 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3507860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538148 kB' 'Mapped: 178740 kB' 'Shmem: 6284100 kB' 'KReclaimable: 183340 kB' 'Slab: 538340 kB' 'SReclaimable: 183340 kB' 'SUnreclaim: 355000 kB' 'KernelStack: 12432 kB' 'PageTables: 8104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7950132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195712 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1793628 kB' 'DirectMap2M: 14903296 kB' 'DirectMap1G: 35651584 kB'
00:04:59.097 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [xtrace condensed: read/continue loop skipped non-matching fields MemTotal through HugePages_Rsvd while looking for HugePages_Surp]
00:04:59.362 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:59.362 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:59.362 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:59.362 11:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:04:59.362 11:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:59.362 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:59.362 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:59.362 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:59.362 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:59.362 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.362 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:59.362 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:59.362 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.362 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.362 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:59.362 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
-r var val _ 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29301360 kB' 'MemAvailable: 32883676 kB' 'Buffers: 3736 kB' 'Cached: 10183064 kB' 'SwapCached: 0 kB' 'Active: 7213876 kB' 'Inactive: 3507860 kB' 'Active(anon): 6819052 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3507860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538148 kB' 'Mapped: 178740 kB' 'Shmem: 6284116 kB' 'KReclaimable: 183340 kB' 'Slab: 538340 kB' 'SReclaimable: 183340 kB' 'SUnreclaim: 355000 kB' 'KernelStack: 12432 kB' 'PageTables: 8104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7950152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195728 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1793628 kB' 'DirectMap2M: 14903296 kB' 'DirectMap1G: 35651584 kB' 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.363 11:12:54 
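
[editor's note] What is scrolling past here is the xtrace of setup/common.sh's get_meminfo helper: it snapshots a meminfo file into an array with mapfile (common.sh@28), then walks it with `IFS=': ' read -r var val _`, printing one `continue` per key until the requested field (HugePages_Surp above, HugePages_Rsvd now) matches, at which point common.sh@33 echoes the value and returns. A condensed sketch of that loop, reconstructed from the trace rather than copied verbatim from the script (get_meminfo_sketch is an illustrative name):

    get_meminfo_sketch() {
        local get=$1 var val _
        local mem_f=/proc/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"              # snapshot, as at common.sh@28
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # one 'continue' per non-matching key
            echo "$val"                        # common.sh@33: echo the value...
            return 0                           # ...and return
        done
        return 1
    }

    get_meminfo_sketch HugePages_Rsvd          # prints 0 on this box, per the trace
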
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.363 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.364 11:12:54 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.364 11:12:54 
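
[editor's note] A side note on why every comparison renders as `[[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]`: the right-hand side of `==` in the script is a quoted expansion, so bash matches it literally rather than as a glob, and xtrace displays a quoted pattern operand by backslash-escaping each character. A two-line reproduction:

    set -x
    key=HugePages_Rsvd
    [[ MemTotal == "$key" ]] || :   # xtrace shows the RHS as \H\u\g\e\P\a\g\e\s\_\R\s\v\d;
                                    # the non-zero status is expected (no match)
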
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:59.364 nr_hugepages=1024 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:59.364 resv_hugepages=0 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:59.364 surplus_hugepages=0 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:59.364 anon_hugepages=0 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.364 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29301108 kB' 'MemAvailable: 32883424 kB' 'Buffers: 3736 kB' 'Cached: 10183104 kB' 'SwapCached: 0 kB' 'Active: 7213512 kB' 'Inactive: 3507860 kB' 'Active(anon): 6818688 kB' 'Inactive(anon): 0 kB' 
'Active(file): 394824 kB' 'Inactive(file): 3507860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537744 kB' 'Mapped: 178740 kB' 'Shmem: 6284156 kB' 'KReclaimable: 183340 kB' 'Slab: 538340 kB' 'SReclaimable: 183340 kB' 'SUnreclaim: 355000 kB' 'KernelStack: 12416 kB' 'PageTables: 8052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7950176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195744 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1793628 kB' 'DirectMap2M: 14903296 kB' 'DirectMap1G: 35651584 kB' 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.365 11:12:54 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.365 11:12:54 
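
[editor's note] The snapshot printed at common.sh@16 above is also enough to sanity-check the hugepage numbers by hand: HugePages_Total: 1024 at Hugepagesize: 2048 kB gives 1024 * 2048 = 2097152 kB, exactly the Hugetlb: 2097152 kB the kernel reports. (Hugetlb sums all hugepage sizes, so the identity only holds cleanly when a single size is in use, as here.) The same check as a few lines of shell:

    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    pagesz=$(awk '/^Hugepagesize:/   {print $2}' /proc/meminfo)   # kB per page
    hugetlb=$(awk '/^Hugetlb:/       {print $2}' /proc/meminfo)   # kB overall
    (( total * pagesz == hugetlb )) \
        && echo "hugetlb accounting consistent: $hugetlb kB" \
        || echo "mismatch: ${total} x ${pagesz} kB vs ${hugetlb} kB"
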
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.365 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
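
[editor's note] get_meminfo has just returned 1024 for HugePages_Total, and the hugepages.sh@110 check that follows re-asserts `(( 1024 == nr_hugepages + surp + resv ))` using the surp=0 and resv=0 gathered above. Stated as a standalone check (nr=1024 is the target from the `echo nr_hugepages=1024` earlier in the trace; the identity is the script's own invariant, not a general kernel guarantee):

    nr=1024
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
    resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
    (( total == nr + surp + resv )) || echo "hugepage accounting mismatch" >&2
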
00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:59.366 11:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 12520600 kB' 'MemUsed: 12098812 kB' 'SwapCached: 0 kB' 'Active: 5786320 kB' 'Inactive: 3329964 kB' 'Active(anon): 5527432 kB' 'Inactive(anon): 0 kB' 'Active(file): 258888 kB' 'Inactive(file): 3329964 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8798908 kB' 'Mapped: 98492 kB' 'AnonPages: 320556 kB' 'Shmem: 5210056 kB' 'KernelStack: 7800 kB' 'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118804 kB' 'Slab: 298320 kB' 'SReclaimable: 118804 kB' 'SUnreclaim: 179516 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.367 11:12:54 
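
[editor's note] The scan is now running with node=0: common.sh@23/@24 swapped mem_f to /sys/devices/system/node/node0/meminfo, whose lines all carry a "Node 0 " prefix, and the `mem=("${mem[@]#Node +([0-9]) }")` expansion visible in the trace strips that prefix so the same key/value parser serves both the global and per-node files. A minimal sketch of just that step (extglob is required for the +([0-9]) pattern; the path assumes a node 0 exists, as on this test box):

    shopt -s extglob
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 MemTotal: ..." -> "MemTotal: ..."
    printf '%s\n' "${mem[@]:0:3}"      # first few keys, now prefix-free
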
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.367 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.368 11:12:54 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.368 11:12:54 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:59.368 node0=1024 expecting 1024 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:59.368 00:04:59.368 real 0m2.918s 00:04:59.368 user 0m0.867s 00:04:59.368 sys 0m1.183s 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.368 11:12:54 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:59.368 ************************************ 00:04:59.368 END TEST default_setup 00:04:59.368 ************************************ 00:04:59.368 11:12:54 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:59.368 11:12:54 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.368 11:12:54 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.368 11:12:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:59.368 ************************************ 00:04:59.368 START TEST per_node_1G_alloc 00:04:59.368 ************************************ 00:04:59.368 11:12:55 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:04:59.368 11:12:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:59.368 11:12:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:59.368 11:12:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:59.368 11:12:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:59.368 11:12:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:59.368 11:12:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:59.368 11:12:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:59.368 11:12:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( 
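The default_setup pass ends the same way every get_meminfo call in this log does: the helper walks the file one key at a time and answers with the value of the single key it was asked for (here HugePages_Surp = 0). A minimal sketch of that field-scan pattern, reconstructed from the xtrace above rather than copied from setup/common.sh:

    # Look up one key in a meminfo-style file, as the trace above does.
    # IFS=': ' splits on the colon and the padding spaces, so $var carries
    # the bare key name and $val the bare number -- exactly what the [[ ]]
    # checks in the xtrace compare against.
    get_meminfo_sketch() {
        local get=$1 mem_f=${2:-/proc/meminfo}
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every other key
            echo "$val"
            return 0
        done < "$mem_f"
        return 1
    }

The traced helper additionally buffers the file into an array first and normalizes per-node lines; both steps show up verbatim in the per_node_1G_alloc trace below.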
00:04:59.368 11:12:54 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:59.368 11:12:54 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:59.368 11:12:54 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:59.368 11:12:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:59.368 ************************************
00:04:59.368 START TEST per_node_1G_alloc
00:04:59.368 ************************************
00:04:59.368 11:12:55 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc
00:04:59.368 11:12:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:04:59.368 11:12:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:04:59.368 11:12:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:59.368 11:12:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:04:59.368 11:12:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:04:59.368 11:12:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:04:59.368 11:12:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:59.368 11:12:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:59.368 11:12:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:59.368 11:12:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:04:59.368 11:12:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:04:59.368 11:12:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:59.368 11:12:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:59.368 11:12:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:59.368 11:12:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:59.368 11:12:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:59.368 11:12:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:04:59.368 11:12:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:59.368 11:12:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:59.368 11:12:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:59.368 11:12:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:59.368 11:12:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:59.368 11:12:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:59.368 11:12:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:04:59.368 11:12:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:04:59.368 11:12:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:59.368 11:12:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:01.277 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:01.277 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:01.277 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:01.277 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:01.277 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:01.277 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:01.277 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:01.277 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:01.277 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:01.277 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:01.277 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:01.277 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:01.277 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:01.277 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:01.277 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:01.277 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:01.277 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:01.277 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
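The sizing that get_test_nr_hugepages just traced out is worth spelling out: 1048576 kB (1 GiB) is requested for each of nodes 0 and 1, and with the 2048 kB default hugepage size reported in the meminfo dumps this comes to the 512 pages per node (and 1024 total) seen above. A sketch of that arithmetic; the variable names mirror the trace, but the division step is inferred from the values rather than quoted from hugepages.sh:

    size_kb=1048576                   # per-node request passed at @145
    hugepagesize_kb=2048              # 'Hugepagesize: 2048 kB' in the dumps
    (( size_kb >= hugepagesize_kb ))  # the @55 sanity check
    nr_hugepages=$(( size_kb / hugepagesize_kb ))
    echo "$nr_hugepages"              # 512, as in the @57/@71 records

    nodes_test=()                     # expected pages per node (@67-@71)
    for node in 0 1; do               # HUGENODE=0,1
        nodes_test[node]=$nr_hugepages
    done
    echo $(( nr_hugepages * ${#nodes_test[@]} ))   # 1024, the @147 total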
00:05:01.277 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:05:01.277 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:05:01.277 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:01.277 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:01.277 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:01.277 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:01.277 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:01.277 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:01.277 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:01.277 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:01.277 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:01.278 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:01.278 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:01.278 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:01.278 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:01.278 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:01.278 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:01.278 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:01.278 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:01.278 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:01.278 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29302532 kB' 'MemAvailable: 32884848 kB' 'Buffers: 3736 kB' 'Cached: 10183164 kB' 'SwapCached: 0 kB' 'Active: 7214208 kB' 'Inactive: 3507860 kB' 'Active(anon): 6819384 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3507860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538392 kB' 'Mapped: 178844 kB' 'Shmem: 6284216 kB' 'KReclaimable: 183340 kB' 'Slab: 538352 kB' 'SReclaimable: 183340 kB' 'SUnreclaim: 355012 kB' 'KernelStack: 12432 kB' 'PageTables: 8028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7950184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195776 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1793628 kB' 'DirectMap2M: 14903296 kB' 'DirectMap1G: 35651584 kB'
[xtrace condensed: setup/common.sh@31-32 `continue` past every key from MemTotal through HardwareCorrupted while scanning for AnonHugePages]
00:05:01.279 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:01.279 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:01.279 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:01.279 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:01.279 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:01.279 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:01.279 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
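Both lookups so far entered setup/common.sh with an empty node argument, which is why the @23 sysfs test fails and @22's /proc/meminfo stays in effect. When a node id is supplied, the per-node file prefixes every line with 'Node N ', and the @29 extglob expansion strips that prefix so the same keys parse either way. A sketch of those two steps; the wrapper function is hypothetical, the traced script inlines them in get_meminfo:

    shopt -s extglob                   # +([0-9]) below is an extglob pattern
    read_meminfo() {                   # illustrative name only
        local node=${1:-} mem_f=/proc/meminfo mem
        # Per-node stats live in sysfs; fall back to the global file otherwise.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"             # buffer the file, as at @28
        mem=("${mem[@]#Node +([0-9]) }")      # 'Node 0 MemFree: ...' -> 'MemFree: ...'
        printf '%s\n' "${mem[@]}"
    }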
00:05:01.279 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:01.279 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:01.279 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:01.279 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:01.279 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:01.279 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:01.279 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:01.279 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:01.279 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:01.279 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29305124 kB' 'MemAvailable: 32887440 kB' 'Buffers: 3736 kB' 'Cached: 10183168 kB' 'SwapCached: 0 kB' 'Active: 7214116 kB' 'Inactive: 3507860 kB' 'Active(anon): 6819292 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3507860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538300 kB' 'Mapped: 178752 kB' 'Shmem: 6284220 kB' 'KReclaimable: 183340 kB' 'Slab: 538344 kB' 'SReclaimable: 183340 kB' 'SUnreclaim: 355004 kB' 'KernelStack: 12464 kB' 'PageTables: 8100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7950204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195776 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1793628 kB' 'DirectMap2M: 14903296 kB' 'DirectMap1G: 35651584 kB'
[xtrace condensed: setup/common.sh@31-32 `continue` past every key from MemTotal through HugePages_Rsvd while scanning for HugePages_Surp]
00:05:01.281 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:01.281 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:01.281 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:01.281 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:01.281 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:01.281 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
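With anon and surp both reported as 0 and the HugePages_Rsvd lookup opening below, the overall shape of verify_nr_hugepages is visible in the trace. A compact sketch of that flow, with get_meminfo standing in for the traced helper (key, optional node id); the per-node bookkeeping is taken from the @117 and @126-@130 records in the default_setup pass, and the HugePages_Free source of the echoed value is an assumption:

    anon=$(get_meminfo AnonHugePages)    # hugepages.sh@97  -> 0
    surp=$(get_meminfo HugePages_Surp)   # hugepages.sh@99  -> 0
    resv=$(get_meminfo HugePages_Rsvd)   # hugepages.sh@100 -> traced next
    for node in "${!nodes_test[@]}"; do
        # fold the node's surplus into its expectation, as at @117
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
        free=$(get_meminfo HugePages_Free "$node")             # assumed source
        echo "node$node=$free expecting ${nodes_test[node]}"   # cf. @128
    done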
00:05:01.281 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:01.281 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:01.281 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:01.281 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:01.281 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:01.281 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:01.281 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:01.281 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:01.281 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:01.281 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:01.281 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:01.281 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29313100 kB' 'MemAvailable: 32895416 kB' 'Buffers: 3736 kB' 'Cached: 10183184 kB' 'SwapCached: 0 kB' 'Active: 7214064 kB' 'Inactive: 3507860 kB' 'Active(anon): 6819240 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3507860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538272 kB' 'Mapped: 178752 kB' 'Shmem: 6284236 kB' 'KReclaimable: 183340 kB' 'Slab: 538416 kB' 'SReclaimable: 183340 kB' 'SUnreclaim: 355076 kB' 'KernelStack: 12464 kB' 'PageTables: 8116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7950224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195776 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1793628 kB' 'DirectMap2M: 14903296 kB' 'DirectMap1G: 35651584 kB'
00:05:01.281 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [per-key xtrace elided: every key from MemTotal through HugePages_Free is read and skipped before HugePages_Rsvd matches]
00:05:01.283 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:01.283 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:01.283 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:01.283 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:01.283 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:01.283 nr_hugepages=1024
00:05:01.283 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:01.283 resv_hugepages=0
00:05:01.283 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:01.283 surplus_hugepages=0
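Both lookups so far run the same scan that get_meminfo traces out at common.sh@17-33: snapshot the meminfo file, then read it entry by entry with IFS=': ' until the requested key matches, and echo that key's value. A condensed sketch of the loop (simplified from the trace, reading /proc/meminfo directly instead of the mapfile snapshot; get_field is a hypothetical name, not the SPDK helper):

    #!/usr/bin/env bash
    # Sketch: fetch one field from /proc/meminfo, as the traced scan does.
    get_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # 'HugePages_Rsvd: 0' splits into var=HugePages_Rsvd, val=0
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done </proc/meminfo
        return 1
    }
    surp=$(get_field HugePages_Surp)   # -> 0 in this run
    resv=$(get_field HugePages_Rsvd)   # -> 0 in this run

The real helper additionally takes an optional node argument, swapping in /sys/devices/system/node/node<N>/meminfo and stripping each line's "Node <N> " prefix before the same scan runs, which is what the per-node calls later in this log do.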
00:05:01.283 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:01.283 anon_hugepages=0
00:05:01.283 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:01.283 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:01.283 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:01.283 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:01.284 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:01.284 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:01.284 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:01.284 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:01.284 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:01.284 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:01.284 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:01.284 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:01.284 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:01.284 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:01.284 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29313380 kB' 'MemAvailable: 32895696 kB' 'Buffers: 3736 kB' 'Cached: 10183208 kB' 'SwapCached: 0 kB' 'Active: 7217140 kB' 'Inactive: 3507860 kB' 'Active(anon): 6822316 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3507860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541332 kB' 'Mapped: 179188 kB' 'Shmem: 6284260 kB' 'KReclaimable: 183340 kB' 'Slab: 538420 kB' 'SReclaimable: 183340 kB' 'SUnreclaim: 355080 kB' 'KernelStack: 12464 kB' 'PageTables: 8112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7953980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195760 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1793628 kB' 'DirectMap2M: 14903296 kB' 'DirectMap1G: 35651584 kB'
00:05:01.284 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [per-key xtrace elided: every key from MemTotal through Unaccepted is read and skipped before HugePages_Total matches]
00:05:01.285 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:01.285 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:05:01.285 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:01.285 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
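Every term of that guard has now been read back from the kernel, so the check is plain arithmetic: the pool the kernel reports must equal the pages the test requested plus surplus plus reserved. A sketch with this run's values:

    # Values extracted from /proc/meminfo above:
    nr_hugepages=1024   # requested pool of 2048 kB pages
    surp=0              # HugePages_Surp
    resv=0              # HugePages_Rsvd
    total=1024          # HugePages_Total
    (( total == nr_hugepages + surp + resv ))   # 1024 == 1024 + 0 + 0, passes
    (( total == nr_hugepages ))                 # no surplus/reserved drift, passes

get_nodes then builds the per-node expectation: with two NUMA nodes, the 1024-page pool is expected to split as 512 pages per node, which is what nodes_sys records next.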
00:05:01.285 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:01.285 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:05:01.285 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:01.285 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:01.285 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:01.285 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:01.285 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:01.285 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:01.285 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:01.286 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:01.286 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:01.286 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:01.286 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:05:01.286 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:01.286 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:01.286 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:01.286 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:01.286 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:01.286 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:01.286 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:01.286 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:01.286 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:01.286 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 13564560 kB' 'MemUsed: 11054852 kB' 'SwapCached: 0 kB' 'Active: 5787736 kB' 'Inactive: 3329964 kB' 'Active(anon): 5528848 kB' 'Inactive(anon): 0 kB' 'Active(file): 258888 kB' 'Inactive(file): 3329964 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8798924 kB' 'Mapped: 98492 kB' 'AnonPages: 321992 kB' 'Shmem: 5210072 kB' 'KernelStack: 7896 kB' 'PageTables: 4644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118804 kB' 'Slab: 298324 kB' 'SReclaimable: 118804 kB' 'SUnreclaim: 179520 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:01.286 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [per-key xtrace elided: every key from MemTotal through HugePages_Total in the node0 snapshot is read and skipped]
00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read
-r var val _ 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19407244 kB' 'MemFree: 15748364 kB' 'MemUsed: 3658880 kB' 'SwapCached: 0 kB' 'Active: 1427040 kB' 'Inactive: 177896 kB' 'Active(anon): 1291104 kB' 'Inactive(anon): 0 kB' 'Active(file): 135936 kB' 'Inactive(file): 177896 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1388064 kB' 'Mapped: 80712 kB' 'AnonPages: 216928 kB' 'Shmem: 1074232 kB' 'KernelStack: 4584 kB' 'PageTables: 3492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64536 kB' 'Slab: 240088 kB' 'SReclaimable: 64536 kB' 'SUnreclaim: 175552 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.287 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.288 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.288 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.288 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.288 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.288 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.288 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.288 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.288 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.288 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.288 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.288 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.288 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.288 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.288 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:05:01.288 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.288 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.288 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.288 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.288 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.288 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.288 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.288 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.288 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.288 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.288 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.288 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.288 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.288 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.288 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.288 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.288 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.288 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.288 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.288 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.288 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.288 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.288 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.288 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.288 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.547 11:12:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.547 
11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.547 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.548 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.548 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.548 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.548 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.548 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.548 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.548 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.548 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.548 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.548 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.548 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.548 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.548 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.548 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.548 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.548 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.548 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.548 11:12:56 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.548 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.548 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.548 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.548 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.548 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.548 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.548 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.548 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.548 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.548 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.548 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:01.548 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:01.548 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:01.548 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:01.548 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:01.548 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:01.548 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:01.548 node0=512 expecting 512 00:05:01.548 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:01.548 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:01.548 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:01.548 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:01.548 node1=512 expecting 512 00:05:01.548 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:01.548 00:05:01.548 real 0m1.949s 00:05:01.548 user 0m0.852s 00:05:01.548 sys 0m1.080s 00:05:01.548 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.548 11:12:56 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:01.548 ************************************ 00:05:01.548 END TEST per_node_1G_alloc 00:05:01.548 ************************************ 00:05:01.548 11:12:56 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:01.548 11:12:56 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:01.548 11:12:56 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.548 11:12:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:01.548 ************************************ 00:05:01.548 START TEST even_2G_alloc 
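The scans condensed above all come from one helper: get_meminfo walks a meminfo file record by record until the requested field matches, then prints its value. A minimal sketch of that pattern, reconstructed from the traced commands (the function name and loop shape here are illustrative, not the verbatim setup/common.sh source):

    #!/usr/bin/env bash
    shopt -s extglob   # required for the +([0-9]) pattern used to strip "Node N " prefixes
    get_meminfo_sketch() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        # Per-node stats live under /sys and prefix every line with "Node N ".
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"   # "HugePages_Surp: 0" -> var, val
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }
    # Usage matching the trace: get_meminfo_sketch HugePages_Surp 1  -> 0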
00:05:01.548 ************************************
00:05:01.548 START TEST even_2G_alloc
00:05:01.548 ************************************
00:05:01.548 11:12:57 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
00:05:01.548 11:12:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:05:01.548 11:12:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:01.548 11:12:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:01.548 11:12:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:01.548 11:12:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:01.548 11:12:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:01.548 11:12:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:01.548 11:12:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:01.548 11:12:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:01.548 11:12:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:01.548 11:12:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:01.548 11:12:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:01.548 11:12:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:01.548 11:12:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:01.548 11:12:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:01.548 11:12:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:05:01.548 11:12:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:05:01.548 11:12:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:05:01.548 11:12:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:01.548 11:12:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:05:01.548 11:12:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:05:01.548 11:12:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:05:01.548 11:12:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
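With no explicit node list, the loop above splits nr_hugepages evenly: 1024 pages over 2 nodes leaves 512 per node, which is what the traced assignments show. A condensed sketch of that split (variable names follow the trace; the division itself is implied by the traced values rather than quoted from hugepages.sh):

    # 1024 hugepages over 2 NUMA nodes -> nodes_test=(512 512)
    _nr_hugepages=1024
    _no_nodes=2
    per_node=$(( _nr_hugepages / _no_nodes ))   # 512
    declare -a nodes_test
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$per_node   # fill from the last node down, as in the trace
        (( _no_nodes-- ))
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=512 node1=512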
00:05:01.548 11:12:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:05:01.548 11:12:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:05:01.548 11:12:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:05:01.548 11:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:01.548 11:12:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:02.925 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:02.925 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:02.925 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:02.925 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:02.925 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:02.925 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:02.925 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:02.925 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:02.925 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:02.925 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:02.925 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:02.925 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:02.925 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:02.925 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:02.925 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:02.925 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:02.925 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:03.188 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:05:03.188 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:05:03.188 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:03.188 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:03.188 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:03.188 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:03.188 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:03.188 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
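The "always [madvise] never" string tested above is the usual content of the kernel's transparent-hugepage mode file, with the active mode in brackets; the test only goes on to probe AnonHugePages when THP is not fully disabled. A small stand-alone equivalent (the sysfs path is the conventional one, inferred from the string format rather than printed in this log):

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        echo "THP mode: ${thp}"   # anonymous hugepages may show up in meminfo
    fi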
00:05:03.188 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:03.188 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:03.188 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:03.188 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:03.188 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:03.188 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.188 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:03.188 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:03.188 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.188 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.188 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:03.188 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:03.188 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29319608 kB' 'MemAvailable: 32901924 kB' 'Buffers: 3736 kB' 'Cached: 10183296 kB' 'SwapCached: 0 kB' 'Active: 7210692 kB' 'Inactive: 3507860 kB' 'Active(anon): 6815868 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3507860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534700 kB' 'Mapped: 177872 kB' 'Shmem: 6284348 kB' 'KReclaimable: 183340 kB' 'Slab: 538836 kB' 'SReclaimable: 183340 kB' 'SUnreclaim: 355496 kB' 'KernelStack: 12320 kB' 'PageTables: 7572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7937792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195744 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1793628 kB' 'DirectMap2M: 14903296 kB' 'DirectMap1G: 35651584 kB'
00:05:03.189 [trace condensed, 00:05:03.189-00:05:03.190: /proc/meminfo scan; every field from MemTotal through HardwareCorrupted fails the AnonHugePages comparison at setup/common.sh@32 and hits continue]
00:05:03.190 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:03.190 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:03.190 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:03.190 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
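Each probe above replays the whole field scan; when only one field is wanted, a single awk pass over /proc/meminfo yields the same number (a hypothetical shortcut for illustration, not what setup/common.sh does):

    # Hypothetical one-line equivalent of the AnonHugePages probe:
    awk '$1 == "AnonHugePages:" { print $2 }' /proc/meminfo   # prints 0 on this box, per the dump above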
00:05:03.190 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:03.190 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:03.190 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:03.190 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:03.190 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:03.190 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.190 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:03.190 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:03.190 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.190 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.190 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:03.190 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:03.190 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29319788 kB' 'MemAvailable: 32902104 kB' 'Buffers: 3736 kB' 'Cached: 10183300 kB' 'SwapCached: 0 kB' 'Active: 7210984 kB' 'Inactive: 3507860 kB' 'Active(anon): 6816160 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3507860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535052 kB' 'Mapped: 177804 kB' 'Shmem: 6284352 kB' 'KReclaimable: 183340 kB' 'Slab: 538804 kB' 'SReclaimable: 183340 kB' 'SUnreclaim: 355464 kB' 'KernelStack: 12352 kB' 'PageTables: 7648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7937812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195696 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1793628 kB' 'DirectMap2M: 14903296 kB' 'DirectMap1G: 35651584 kB'
00:05:03.190 [trace condensed, 00:05:03.190-00:05:03.191: /proc/meminfo scan; MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim and KernelStack each fail the HugePages_Surp comparison at setup/common.sh@32 and hit continue]
00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
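The records in this run come from setup/common.sh's get_meminfo helper scanning /proc/meminfo key by key for HugePages_Surp, emitting "continue" for every non-matching key. A minimal sketch of that loop, reconstructed from the trace itself (the @17-@33 line tags) rather than from the canonical setup/common.sh, could look like this; get_meminfo, mem_f, and the extglob prefix strip all appear verbatim in the records above:

shopt -s extglob   # needed for the +([0-9]) pattern used below

get_meminfo() {
    local get=$1 node=${2:-} var val _ line
    local mem_f=/proc/meminfo mem
    # Per-node counters live under sysfs; the trace probes this path first.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node N "; strip it, as @29 does.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the long runs of continue above
        echo "$val"
        return 0
    done
}

Called as get_meminfo HugePages_Surp it prints 0 for the dump above; with a node argument (get_meminfo HugePages_Surp 0, as at the end of this section) it reads the node's sysfs file instead.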
00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.191 11:12:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.191 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29319656 kB' 'MemAvailable: 32901972 kB' 'Buffers: 3736 kB' 'Cached: 10183324 kB' 'SwapCached: 0 kB' 'Active: 7211324 kB' 'Inactive: 3507860 kB' 'Active(anon): 6816500 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3507860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535316 kB' 'Mapped: 177728 kB' 'Shmem: 6284376 kB' 'KReclaimable: 183340 kB' 'Slab: 538788 kB' 'SReclaimable: 183340 kB' 'SUnreclaim: 355448 kB' 'KernelStack: 12400 kB' 'PageTables: 7820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7938204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195696 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1793628 kB' 'DirectMap2M: 14903296 kB' 'DirectMap1G: 35651584 kB' 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
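A side note on why the comparison targets render as \H\u\g\e\P\a\g\e\s\_\R\s\v\d in these records: under set -x, bash escapes a quoted right-hand side of == inside [[ ]] character by character, to show it is matched literally rather than as a glob. A two-line illustration (hypothetical variable name):

set -x
want=HugePages_Rsvd
[[ MemTotal == "$want" ]]   # traces as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]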
00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.192 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.193 
11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.193 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:03.194 nr_hugepages=1024 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:03.194 resv_hugepages=0 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:03.194 surplus_hugepages=0 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:03.194 anon_hugepages=0 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29319164 
kB' 'MemAvailable: 32901480 kB' 'Buffers: 3736 kB' 'Cached: 10183348 kB' 'SwapCached: 0 kB' 'Active: 7211316 kB' 'Inactive: 3507860 kB' 'Active(anon): 6816492 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3507860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535336 kB' 'Mapped: 177728 kB' 'Shmem: 6284400 kB' 'KReclaimable: 183340 kB' 'Slab: 538788 kB' 'SReclaimable: 183340 kB' 'SUnreclaim: 355448 kB' 'KernelStack: 12400 kB' 'PageTables: 7808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7938228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195696 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1793628 kB' 'DirectMap2M: 14903296 kB' 'DirectMap1G: 35651584 kB' 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.194 11:12:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.194 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
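The meminfo dumps in this pass are internally consistent: Hugetlb (2097152 kB) equals HugePages_Total times Hugepagesize (1024 x 2048 kB), i.e. the full 2 GiB the test allocates. A quick self-check against the live file, assuming the stock /proc/meminfo field layout:

awk '/^HugePages_Total:/ {t=$2} /^Hugepagesize:/ {s=$2} /^Hugetlb:/ {h=$2}
     END {exit !(h == t * s)}' /proc/meminfo && echo hugetlb accounting OK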
00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.195 11:12:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.195 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
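The get_meminfo call that just echoed 1024 is the pattern this whole trace keeps repeating: setup/common.sh slurps a meminfo file with mapfile, strips any "Node N" prefix, then walks the entries with IFS=': ' read until the requested key matches and its value is echoed. A minimal sketch of that loop, reconstructed from the xtrace above rather than copied from the SPDK source:

  #!/usr/bin/env bash
  # Reconstructed from the xtrace above; not the verbatim SPDK helper.
  shopt -s extglob   # needed for the +([0-9]) pattern seen in the trace
  get_meminfo() {
    local get=$1 node=${2:-}
    local var val _ mem_f=/proc/meminfo
    # Per-node queries switch to the node's own meminfo file,
    # e.g. /sys/devices/system/node/node0/meminfo in the trace below.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
      mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # node files prefix every line with "Node N "
    local entry
    for entry in "${mem[@]}"; do
      IFS=': ' read -r var val _ <<< "$entry"
      [[ $var == "$get" ]] || continue   # the skip/continue entries in the trace
      echo "$val"                        # e.g. 1024 for HugePages_Total
      return 0
    done
    return 1
  }

Called as get_meminfo HugePages_Total it prints the system-wide count; get_meminfo HugePages_Surp 0 reads node 0's surplus. The 1024 just echoed feeds the (( 1024 == nr_hugepages + surp + resv )) assertion that follows.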
00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 13572692 kB' 'MemUsed: 11046720 kB' 'SwapCached: 0 kB' 'Active: 5784876 kB' 'Inactive: 3329964 kB' 'Active(anon): 5525988 kB' 'Inactive(anon): 0 kB' 'Active(file): 258888 kB' 'Inactive(file): 3329964 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8798948 kB' 'Mapped: 97760 kB' 'AnonPages: 319024 kB' 'Shmem: 5210096 kB' 'KernelStack: 7784 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118804 kB' 'Slab: 298420 kB' 'SReclaimable: 118804 kB' 'SUnreclaim: 179616 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:03.196 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
(identical compare/continue entries for the remaining node0 meminfo keys listed in the dump above omitted)
00:05:03.457 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:03.457 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:03.457 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:03.457 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:03.457 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:03.457 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:03.457 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:05:03.457 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:03.457 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:05:03.457 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:03.457 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:03.457 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.457 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:05:03.457 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:05:03.457 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.457 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.457 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:03.457 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:03.457 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19407244 kB' 'MemFree: 15746472 kB' 'MemUsed: 3660772 kB' 'SwapCached: 0 kB' 'Active: 1426276 kB' 'Inactive: 177896 kB' 'Active(anon): 1290340 kB' 'Inactive(anon): 0 kB' 'Active(file): 135936 kB' 'Inactive(file): 177896 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1388176 kB' 'Mapped: 79968 kB' 'AnonPages: 216084 kB' 'Shmem: 1074344 kB' 'KernelStack: 4600 kB' 'PageTables: 3552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64536 kB' 'Slab: 240368 kB' 'SReclaimable: 64536 kB' 'SUnreclaim: 175832 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:03.457 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:03.457 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
(identical compare/continue entries for the remaining node1 meminfo keys listed in the dump above omitted)
00:05:03.458 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:03.458 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:03.458 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:03.458 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:03.458 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:03.458 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:03.458 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:03.458 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:03.458 node0=512 expecting 512
00:05:03.458 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:03.458 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:03.458 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:03.458 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:05:03.459 node1=512 expecting 512
00:05:03.459 11:12:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:03.459 
00:05:03.459 real 0m1.856s
00:05:03.459 user 0m0.734s
00:05:03.459 sys 0m1.101s
00:05:03.459 11:12:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:03.459 11:12:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:03.459 ************************************
00:05:03.459 END TEST even_2G_alloc
00:05:03.459 ************************************
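even_2G_alloc passes because each NUMA node ends up with exactly half of the 1024 pages, as the node0=512/node1=512 lines above confirm. The get_nodes/verify steps boil down to reading the per-node 2 MiB hugepage counts out of sysfs and comparing them with the expected split. A hedged sketch: the sysfs paths are standard kernel ABI and the variable names mirror the traced helpers, but the code is illustrative, not the SPDK source:

  #!/usr/bin/env bash
  # Illustrative reconstruction of the per-node check; assumes a 2-node box.
  declare -a nodes_sys
  for node in /sys/devices/system/node/node[0-9]*; do
    # 2048 kB hugepages actually reserved on this node
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  expected=512   # 1024 pages split evenly across the 2 nodes
  for id in "${!nodes_sys[@]}"; do
    echo "node$id=${nodes_sys[$id]} expecting $expected"
    (( nodes_sys[id] == expected )) || exit 1
  done

The per-node HugePages_Surp reads traced above feed the same comparison, adjusting the expected count when surplus pages are present (here both reads returned 0).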
00:05:03.459 11:12:58 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:05:03.459 11:12:58 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:03.459 11:12:58 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:03.459 11:12:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:03.459 ************************************
00:05:03.459 START TEST odd_alloc
00:05:03.459 ************************************
00:05:03.459 11:12:58 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc
00:05:03.459 11:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:05:03.459 11:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:05:03.459 11:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:03.459 11:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:03.459 11:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:05:03.459 11:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:03.459 11:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:03.459 11:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:03.459 11:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:05:03.459 11:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:03.459 11:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:03.459 11:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:03.459 11:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:03.459 11:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:03.459 11:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:03.459 11:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:05:03.459 11:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:05:03.459 11:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:05:03.459 11:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:03.459 11:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:05:03.459 11:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:05:03.459 11:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:05:03.459 11:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
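get_test_nr_hugepages_per_node has just spread 1025 pages over 2 nodes, and the @81-@84 entries show how: each pass gives the current node an even share of whatever is still unplaced, so the odd page lands on the last node processed. A sketch of that loop, reconstructed from the traced assignments (the ": 513", ": 1" lines above are the xtrace of arithmetic side effects; this is a reconstruction, not the verbatim script):

  #!/usr/bin/env bash
  # Reconstructed from the @81-@84 xtrace above.
  _nr_hugepages=1025
  _no_nodes=2
  declare -a nodes_test
  while (( _no_nodes > 0 )); do
    nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
    : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))  # pages left: 513, then 0
    : $(( --_no_nodes ))                                 # nodes left: 1, then 0
  done
  echo "node1=${nodes_test[1]} node0=${nodes_test[0]}"   # node1=512 node0=513

That matches the trace: node1 is assigned 512 first, node0 absorbs the remaining 513, which is why this test expects a 512/513 split rather than an even one.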
00:05:03.459 11:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:05:03.459 11:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:05:03.459 11:12:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:05:03.459 11:12:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:03.459 11:12:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:04.846 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:04.846 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:04.846 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:04.846 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:04.846 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:04.846 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:04.846 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:04.846 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:04.846 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:04.846 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:04.846 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:04.846 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:04.846 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:04.846 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:04.846 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:04.846 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:04.846 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:05.109 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:05:05.109 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:05:05.109 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:05.109 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:05.109 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:05.109 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:05.109 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:05.109 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:05.109 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:05.109 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:05.109 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:05.109 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:05.109 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:05.109 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:05.109 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:05.109 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:05.109 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:05.109 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:05.109 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:05.109 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:05.109 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29296940 kB' 'MemAvailable: 32879256 kB' 'Buffers: 3736 kB' 'Cached: 10183436 kB' 'SwapCached: 0 kB' 'Active: 7211900 kB' 'Inactive: 3507860 kB' 'Active(anon): 6817076 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3507860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535740 kB' 'Mapped: 177824 kB' 'Shmem: 6284488 kB' 'KReclaimable: 183340 kB' 'Slab: 538604 kB' 'SReclaimable: 183340 kB' 'SUnreclaim: 355264 kB' 'KernelStack: 12416 kB' 'PageTables: 7824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352332 kB' 'Committed_AS: 7938600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195728 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1793628 kB' 'DirectMap2M: 14903296 kB' 'DirectMap1G: 35651584 kB'
00:05:05.109 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:05.109 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
(identical compare/continue entries for the remaining meminfo keys listed in the dump above omitted)
00:05:05.110 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:05.110 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:05.110 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:05.110 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:05.110 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:05.110 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:05.110 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:05.110 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:05.110 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:05.110 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:05.110 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:05.110 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:05.110 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:05.110 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:05.110 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:05.110 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:05.110 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29296940 kB' 'MemAvailable: 32879256 kB' 'Buffers: 3736 kB' 'Cached: 10183440 kB' 'SwapCached: 0 kB' 'Active: 7211604 kB' 'Inactive: 3507860 kB' 'Active(anon): 6816780 kB' 'Inactive(anon): 0
kB' 'Active(file): 394824 kB' 'Inactive(file): 3507860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535484 kB' 'Mapped: 177816 kB' 'Shmem: 6284492 kB' 'KReclaimable: 183340 kB' 'Slab: 538644 kB' 'SReclaimable: 183340 kB' 'SUnreclaim: 355304 kB' 'KernelStack: 12400 kB' 'PageTables: 7772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352332 kB' 'Committed_AS: 7938620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195664 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1793628 kB' 'DirectMap2M: 14903296 kB' 'DirectMap1G: 35651584 kB' 00:05:05.110 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.110 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.110 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.110 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.110 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.110 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.110 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.110 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.111 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.111 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.111 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.111 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.111 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.111 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.111 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.111 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.111 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.111 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.111 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.111 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.111 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.111 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.111 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.111 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.111 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.111 
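[editor's note] The get_meminfo loop traced above is plain bash: it snapshots /proc/meminfo into an array, strips any "Node N " prefix, then splits each "key: value" record with IFS=': ' and returns the value of the first matching key. A minimal stand-alone sketch of the same idea, assuming a plain /proc/meminfo source; the name get_meminfo_sketch and its exact shape are illustrative, not SPDK's actual helper:

    get_meminfo_sketch() {        # usage: get_meminfo_sketch HugePages_Surp
        local get=$1 var val _
        # Walk "key: value [unit]" records; IFS=': ' splits on colon and space.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"       # first match wins, mirroring the traced echo/return 0
                return 0
            fi
        done < /proc/meminfo
        return 1                  # requested key not present
    }

On the machine above, get_meminfo_sketch HugePages_Surp would print 0, matching the surp=0 assignment in the trace.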
00:05:05.111 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [per-key scan elided: MemTotal through HugePages_Free each compared against HugePages_Surp, no match, loop continue]
00:05:05.112 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:05.112 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:05.112 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:05.112 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:05.112 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:05.112 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:05.112 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:05.112 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:05.112 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:05.112 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:05.112 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:05.112 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:05.112 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:05.112 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:05.112 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:05.112 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:05.112 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29296940 kB' 'MemAvailable: 32879256 kB' 'Buffers: 3736 kB' 'Cached: 10183456 kB' 'SwapCached: 0 kB' 'Active: 7211700 kB' 'Inactive: 3507860 kB' 'Active(anon): 6816876 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3507860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535544 kB' 'Mapped: 177740 kB' 'Shmem: 6284508 kB' 'KReclaimable: 183340 kB' 'Slab: 538644 kB' 'SReclaimable: 183340 kB' 'SUnreclaim: 355304 kB' 'KernelStack: 12432 kB' 'PageTables: 7820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352332 kB' 'Committed_AS: 7938640 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195680 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1793628 kB' 'DirectMap2M: 14903296 kB' 'DirectMap1G: 35651584 kB'
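[editor's note] The preamble before each snapshot (common.sh@18-25) also shows why node= matters: with an empty node the existence test degenerates to /sys/devices/system/node/node/meminfo (false) and the global /proc/meminfo is kept, while a real NUMA node id would switch the source to that node's meminfo file. A hedged sketch of that selection, under the assumption that this is what the node parameter is for; pick_meminfo_file is an illustrative name, not SPDK's code:

    pick_meminfo_file() {         # usage: pick_meminfo_file [numa-node-id]
        local node=$1 mem_f=/proc/meminfo
        # Only switch when a node id was given and the per-node file exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        echo "$mem_f"
    }

Per-node files carry "Node N " prefixes on every line, which is what the traced mem=("${mem[@]#Node +([0-9]) }") expansion strips so the same key-scan loop works for both sources.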
00:05:05.113 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [per-key scan elided: MemTotal through HugePages_Free each compared against HugePages_Rsvd, no match, loop continue]
00:05:05.114 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:05.114 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:05.114 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:05.114 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:05.114 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
nr_hugepages=1025
00:05:05.114 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:05:05.114 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:05:05.114 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:05:05.114 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:05.114 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:05:05.114 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:05.114 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:05.114 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:05.114 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:05.114 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:05.114 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:05.114 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:05.114 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:05.114 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:05.114 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:05.114 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:05.114 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:05.115 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29296940 kB' 'MemAvailable: 32879256 kB' 'Buffers: 3736 kB' 'Cached: 10183476 kB' 'SwapCached: 0 kB' 'Active: 7211732 kB' 'Inactive: 3507860 kB' 'Active(anon): 6816908 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3507860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535544 kB' 'Mapped: 177740 kB' 'Shmem: 6284528 kB' 'KReclaimable: 183340 kB' 'Slab: 538644 kB' 'SReclaimable: 183340 kB' 'SUnreclaim: 355304 kB' 'KernelStack: 12432 kB' 'PageTables: 7820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352332 kB' 'Committed_AS: 7938660 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195680 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1793628 kB' 'DirectMap2M: 14903296 kB' 'DirectMap1G: 35651584 kB'
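[editor's note] With anon=0, surp=0 and resv=0 collected, the checks at hugepages.sh@107-109 reduce to two arithmetic assertions: the requested odd page count (1025 pages of 2048 kB each, which matches the Hugetlb: 2099200 kB line in the snapshots, since 1025 * 2048 = 2099200) must equal HugePages_Total plus the surplus and reserved counts, and must also equal HugePages_Total outright. A self-contained restatement of that check; verify_odd_alloc is a hypothetical wrapper around the same two comparisons, not the test's own function:

    verify_odd_alloc() {          # usage: verify_odd_alloc 1025
        local want=$1 total surp resv
        total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
        surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
        resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
        # Same two assertions as hugepages.sh@107 and @109 in the trace.
        (( want == total + surp + resv )) && (( want == total ))
    }

Both comparisons hold in this run (1025 == 1025 + 0 + 0), so the odd_alloc test proceeds to re-read HugePages_Total below.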
00:05:05.115 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:05.115 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:05.376 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [per-key scan in progress: MemTotal through Bounce compared against HugePages_Total so far, no match, loop continue]
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.377 11:13:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- 
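The setup/common.sh@28-@33 records above are the whole of the generic meminfo getter: slurp the file, strip any "Node N " prefix, then split each line on IFS=': ' until the requested key matches. A minimal standalone sketch of that pattern, runnable as-is (the helper name get_meminfo_sketch is illustrative; the paths, the extglob prefix strip, and the read loop are exactly what the trace shows):

    #!/usr/bin/env bash
    shopt -s extglob                     # enables the +([0-9]) pattern below

    get_meminfo_sketch() {               # illustrative name, not SPDK's
        local get=$1 node=$2
        local mem_f=/proc/meminfo var val _ line
        # Per-node meminfo lives in sysfs and prefixes every line with "Node N "
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }") # drop the "Node N " prefix, if any
        local IFS=': '                   # split "Key: value unit" on ':' and ' '
        for line in "${mem[@]}"; do
            read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo_sketch HugePages_Total    # whole system; printed 1025 in the trace
    get_meminfo_sketch HugePages_Surp 0   # NUMA node 0; printed 0 in the trace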
00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:05.377 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 13563196 kB' 'MemUsed: 11056216 kB' 'SwapCached: 0 kB' 'Active: 5785260 kB' 'Inactive: 3329964 kB' 'Active(anon): 5526372 kB' 'Inactive(anon): 0 kB' 'Active(file): 258888 kB' 'Inactive(file): 3329964 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8798952 kB' 'Mapped: 97760 kB' 'AnonPages: 319396 kB' 'Shmem: 5210100 kB' 'KernelStack: 7832 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118804 kB' 'Slab: 298400 kB' 'SReclaimable: 118804 kB' 'SUnreclaim: 179596 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace elided, 00:05:05.377-00:05:05.379: setup/common.sh@31-@32 scan of node0 keys MemTotal through HugePages_Free against \H\u\g\e\P\a\g\e\s\_\S\u\r\p; every one hits continue]
00:05:05.379 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
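get_nodes (setup/hugepages.sh@27-@33 above) discovers NUMA nodes by globbing sysfs and keys an array on the numeric suffix via ${node##*node}. A sketch of the same discovery, with one labeled assumption: it reads live per-node 2048 kB hugepage counts out of sysfs, whereas the traced run recorded the values (512 and 513) it had just configured:

    #!/usr/bin/env bash
    shopt -s extglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        id=${node##*node}   # "/sys/.../node1" -> "1": strip through the last "node"
        # Assumption for illustration: pull the current 2 MiB hugepage count
        nodes_sys[$id]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || exit 1        # mirrors hugepages.sh@33
    echo "nodes: ${!nodes_sys[*]} counts: ${nodes_sys[*]}"

The per-node loop that follows (@115-@117) then adds each node's HugePages_Surp on top of the reserved count before comparing against the expectation.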
00:05:05.379 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:05.379 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:05.379 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:05.379 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:05.379 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:05.379 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:05:05.379 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:05.379 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:05:05.379 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:05.379 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:05.379 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:05.379 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:05:05.379 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:05:05.379 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:05.379 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:05.379 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:05.379 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:05.379 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19407244 kB' 'MemFree: 15733832 kB' 'MemUsed: 3673412 kB' 'SwapCached: 0 kB' 'Active: 1426476 kB' 'Inactive: 177896 kB' 'Active(anon): 1290540 kB' 'Inactive(anon): 0 kB' 'Active(file): 135936 kB' 'Inactive(file): 177896 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1388304 kB' 'Mapped: 79980 kB' 'AnonPages: 216148 kB' 'Shmem: 1074472 kB' 'KernelStack: 4600 kB' 'PageTables: 3560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64536 kB' 'Slab: 240244 kB' 'SReclaimable: 64536 kB' 'SUnreclaim: 175708 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[xtrace elided, 00:05:05.379-00:05:05.380: setup/common.sh@31-@32 scan of node1 keys MemTotal through HugePages_Free against \H\u\g\e\P\a\g\e\s\_\S\u\r\p; every one hits continue]
00:05:05.380 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:05.380 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:05.380 11:13:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:05.380 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:05.380 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:05.380 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:05.380 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:05.380 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:05:05.380 node0=512 expecting 513
00:05:05.380 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:05.380 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:05.380 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:05.380 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:05:05.380 node1=513 expecting 512
00:05:05.380 11:13:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:05:05.380
00:05:05.380 real 0m1.944s
00:05:05.380 user 0m0.809s
00:05:05.380 sys 0m1.117s
00:05:05.380 11:13:00 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:05.380 11:13:00 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:05.380 ************************************
00:05:05.380 END TEST odd_alloc
00:05:05.380 ************************************
00:05:05.380 11:13:00 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:05:05.380 11:13:00 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:05.380 11:13:00 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:05.380 11:13:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:05.380 ************************************
00:05:05.380 START TEST custom_alloc
00:05:05.380 ************************************
00:05:05.380 11:13:00 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc
00:05:05.380 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:05:05.380 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:05:05.380 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:05:05.380 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:05:05.380 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:05:05.380 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:05:05.380 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:05:05.380 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:05.380 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:05.380 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:05.380 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:05.380 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:05.380 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:05.380 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:05.380 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:05.380 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:05.380 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
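Worth noting how get_test_nr_hugepages turned its argument into a page count above: a request of 1048576 kB became nr_hugepages=512, and (a few records below) 2097152 kB becomes nr_hugepages=1024, i.e. the request divided by the 2048 kB hugepage size. A sketch of that arithmetic (the division is inferred from those two pairs plus the size >= default_hugepages guard at @55; the helper name is illustrative):

    #!/usr/bin/env bash
    # default_hugepages is assumed to be the hugepage size in kB, matching
    # the 'Hugepagesize: 2048 kB' line in the snapshots above.
    default_hugepages=2048
    get_test_nr_hugepages_sketch() {       # illustrative name
        local size=$1                      # requested total, in kB
        (( size >= default_hugepages )) || return 1
        echo $(( size / default_hugepages ))
    }
    get_test_nr_hugepages_sketch 1048576   # -> 512  (hugepages.sh@57, 1 GiB)
    get_test_nr_hugepages_sketch 2097152   # -> 1024 (hugepages.sh@57, 2 GiB)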
00:05:05.380 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:05.380 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:05.380 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:05.380 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:05:05.380 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:05:05.380 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:05:05.380 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:05.380 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:05.381 11:13:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:06.758 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:06.758 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:06.758 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:06.758 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:06.758 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:06.758 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:06.758 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:06.758 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:06.758 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:06.758 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:06.758 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:06.758 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:06.758 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:06.758 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:06.758 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:06.758 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:06.758 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
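The @181-@183 loop above folded the per-node request array into the HUGENODE string that scripts/setup.sh consumes, with the comma join coming from the local IFS=, set at hugepages.sh@167 and _nr_hugepages accumulating the 1536-page total. A self-contained sketch using the values from this run:

    #!/usr/bin/env bash
    declare -a nodes_hp=([0]=512 [1]=1024)   # values taken from the trace
    declare -a HUGENODE=()
    _nr_hugepages=0
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
        (( _nr_hugepages += nodes_hp[node] ))
    done
    # "${HUGENODE[*]}" joins on the first character of IFS, hence the commas:
    (IFS=,; echo "HUGENODE='${HUGENODE[*]}'")
    # -> HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024', as at hugepages.sh@187
    echo "expected total: $_nr_hugepages"     # 1536, as at hugepages.sh@188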
00:05:07.020 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:05:07.020 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:05:07.020 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:05:07.020 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:07.020 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:07.020 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:07.020 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:07.020 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:07.020 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:07.020 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:07.020 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:07.020 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:07.020 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:07.020 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:07.020 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:07.020 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:07.020 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:07.020 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:07.020 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:07.020 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:07.020 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:07.020 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 28236092 kB' 'MemAvailable: 31818408 kB' 'Buffers: 3736 kB' 'Cached: 10183576 kB' 'SwapCached: 0 kB' 'Active: 7211788 kB' 'Inactive: 3507860 kB' 'Active(anon): 6816964 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3507860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535532 kB' 'Mapped: 177756 kB' 'Shmem: 6284628 kB' 'KReclaimable: 183340 kB' 'Slab: 538396 kB' 'SReclaimable: 183340 kB' 'SUnreclaim: 355056 kB' 'KernelStack: 12432 kB' 'PageTables: 7768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829068 kB' 'Committed_AS: 7938868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195776 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1793628 kB' 'DirectMap2M: 14903296 kB' 'DirectMap1G: 35651584 kB'
00:05:07.020 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:07.020 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:05:07.020 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:07.020 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[trace condensed: the same @32 compare / @32 continue / @31 IFS=': ' / @31 read cycle repeats for every remaining /proc/meminfo field through HardwareCorrupted]
00:05:07.021 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:07.021 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:07.021 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:07.021 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:07.021 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:07.021 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:07.021 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:07.021 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:07.021 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:07.021 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:07.021 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:07.021 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:07.021 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:07.021 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:07.021 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:07.021 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
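The long compare-and-continue run above is common.sh's get_meminfo walking the mapfile'd copy of /proc/meminfo one 'Key: value' pair at a time with IFS=': ' until the requested key matches, then echoing its value (0 for AnonHugePages here, hence anon=0). A self-contained sketch of that scan follows; get_meminfo_sketch is a hypothetical stand-in, and the real helper can also read a per-node meminfo file and strip its 'Node N' prefix, as the mem=() expansion above shows:

#!/usr/bin/env bash
# Sketch of the scan traced above: split each /proc/meminfo line on ': ',
# skip lines until the requested key matches, then print its numeric value.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}

get_meminfo_sketch AnonHugePages   # prints 0 on this box, hence anon=0 above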
00:05:07.021 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 28236240 kB' 'MemAvailable: 31818556 kB' 'Buffers: 3736 kB' 'Cached: 10183580 kB' 'SwapCached: 0 kB' 'Active: 7212004 kB' 'Inactive: 3507860 kB' 'Active(anon): 6817180 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3507860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535744 kB' 'Mapped: 177756 kB' 'Shmem: 6284632 kB' 'KReclaimable: 183340 kB' 'Slab: 538380 kB' 'SReclaimable: 183340 kB' 'SUnreclaim: 355040 kB' 'KernelStack: 12464 kB' 'PageTables: 7796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829068 kB' 'Committed_AS: 7938888 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195760 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1793628 kB' 'DirectMap2M: 14903296 kB' 'DirectMap1G: 35651584 kB'
00:05:07.022 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:07.022 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:05:07.022 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:07.022 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[trace condensed: the same compare / continue / IFS=': ' / read cycle repeats for every remaining /proc/meminfo field until HugePages_Surp is reached]
00:05:07.023 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:07.023 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:07.023 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:07.023 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:07.023 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:07.023 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:07.023 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:07.023 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:07.023 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:07.023 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:07.023 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:07.023 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:07.023 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:07.023 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:07.023 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:07.023 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:07.024 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 28236268 kB' 'MemAvailable: 31818584 kB' 'Buffers: 3736 kB' 'Cached: 10183596 kB' 'SwapCached: 0 kB' 'Active: 7212000 kB' 'Inactive: 3507860 kB' 'Active(anon): 6817176 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3507860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535716 kB' 'Mapped: 177756 kB' 'Shmem: 6284648 kB' 'KReclaimable: 183340 kB' 'Slab: 538408 kB' 'SReclaimable: 183340 kB' 'SUnreclaim: 355068 kB' 'KernelStack: 12464 kB' 'PageTables: 7812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829068 kB' 'Committed_AS: 7938908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195760 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1793628 kB' 'DirectMap2M: 14903296 kB' 'DirectMap1G: 35651584 kB'
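At this point anon=0 and surp=0 are in hand, and HugePages_Rsvd is about to be read from the third snapshot above, so the verification reduces to checking the kernel's hugepage pool against the 1536 pages requested through HUGENODE. A sketch of that bookkeeping, under the assumption that verify_nr_hugepages nets surplus and reserved pages out of the comparison (the exact formula is not visible in this trace):

#!/usr/bin/env bash
# Values mirror the meminfo snapshots above; the comparison itself is an
# assumption about verify_nr_hugepages, not a quote of it.
nr_hugepages=1536   # 512 (node 0) + 1024 (node 1) requested via HUGENODE
surp=0              # HugePages_Surp from the second snapshot
resv=0              # HugePages_Rsvd from the third snapshot

total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
if (( total == nr_hugepages + surp + resv )); then
    echo "OK: kernel pool ($total pages) matches the requested layout"
else
    echo "mismatch: have $total, expected $((nr_hugepages + surp + resv))" >&2
fi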
00:05:07.024 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:07.024 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:05:07.024 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:07.024 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[trace condensed: the same compare / continue / IFS=': ' / read cycle repeats field by field, timestamps advancing from 00:05:07.024 to 00:05:07.289, down to the ShmemHugePages check shown next]
00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc 
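[editor's note: the wall of "continue" lines condensed above comes from setup/common.sh's get_meminfo helper, which scans /proc/meminfo (or a per-node meminfo file) one "key: value" line at a time until the requested key matches, then echoes the value. A minimal sketch of that lookup pattern, assuming the same file layout; the name get_meminfo_sketch is illustrative, not the exact SPDK source:

    # Look up one key the way the traced loop does: strip any "Node <N> "
    # prefix, split each line on ': ', and stop at the first matching key.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"   # kB figure, or a bare page count for HugePages_*
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

e.g. get_meminfo_sketch HugePages_Rsvd prints the 0 echoed above.]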
-- setup/hugepages.sh@100 -- # resv=0 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:05:07.289 nr_hugepages=1536 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:07.289 resv_hugepages=0 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:07.289 surplus_hugepages=0 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:07.289 anon_hugepages=0 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 28236016 kB' 'MemAvailable: 31818332 kB' 'Buffers: 3736 kB' 'Cached: 10183596 kB' 'SwapCached: 0 kB' 'Active: 7211916 kB' 'Inactive: 3507860 kB' 'Active(anon): 6817092 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3507860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535652 kB' 'Mapped: 177756 kB' 'Shmem: 6284648 kB' 'KReclaimable: 183340 kB' 'Slab: 538408 kB' 'SReclaimable: 183340 kB' 'SUnreclaim: 355068 kB' 'KernelStack: 12464 kB' 'PageTables: 7812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829068 kB' 'Committed_AS: 7939924 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195776 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1793628 kB' 'DirectMap2M: 14903296 kB' 'DirectMap1G: 35651584 kB' 00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
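[editor's note: the hugepages.sh@107 check above is plain accounting: the 1536 pages requested must equal nr_hugepages + surplus + reserved, and with the values just echoed (nr_hugepages=1536, surplus_hugepages=0, resv_hugepages=0) it holds as 1536 == 1536 + 0 + 0. The /proc/meminfo dump also agrees internally: Hugetlb 3145728 kB is exactly 1536 pages x 2048 kB. A sketch of the same check, with the values hard-coded from this run:

    # Values echoed by the trace for this run (re-check, not the SPDK source).
    nr_hugepages=1536 surp=0 resv=0 hugepagesize_kb=2048
    (( 1536 == nr_hugepages + surp + resv )) || echo "accounting mismatch"
    (( nr_hugepages * hugepagesize_kb == 3145728 )) || echo "Hugetlb mismatch"
]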
00:05:07.289 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue [trace condensed: the read/compare loop skips every /proc/meminfo key from the dump above, MemFree through Unaccepted, until HugePages_Total matches] 00:05:07.291 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:05:07.291 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:07.291 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:07.291 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:07.291 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:07.291 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:07.291 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:07.291 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:07.291 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:07.291 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:07.291 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:07.291 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:07.291 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:07.291 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- #
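[editor's note: this custom_alloc pass splits the 1536-page request across NUMA nodes as 512 on node0 and 1024 on node1 (the nodes_sys assignments above), then re-reads each node's meminfo to confirm none of the pages became surplus. A sketch of that verification loop, reusing the illustrative get_meminfo_sketch from the earlier note:

    # Expected per-node split for this run; indices are NUMA node ids.
    declare -A nodes_expected=([0]=512 [1]=1024)
    for node in "${!nodes_expected[@]}"; do
        surp=$(get_meminfo_sketch HugePages_Surp "$node")
        total=$(get_meminfo_sketch HugePages_Total "$node")
        echo "node$node: total=$total expected=${nodes_expected[$node]} surp=$surp"
    done
]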
get_meminfo HugePages_Surp 0 00:05:07.291 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:07.291 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:07.291 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:07.291 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:07.291 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.291 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:07.291 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:07.291 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.291 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.291 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.291 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.291 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 13557776 kB' 'MemUsed: 11061636 kB' 'SwapCached: 0 kB' 'Active: 5785736 kB' 'Inactive: 3329964 kB' 'Active(anon): 5526848 kB' 'Inactive(anon): 0 kB' 'Active(file): 258888 kB' 'Inactive(file): 3329964 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8798968 kB' 'Mapped: 97760 kB' 'AnonPages: 319920 kB' 'Shmem: 5210116 kB' 'KernelStack: 7864 kB' 'PageTables: 4040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118804 kB' 'Slab: 298380 kB' 'SReclaimable: 118804 kB' 'SUnreclaim: 179576 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:07.291 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.291 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.291 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.291 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.291 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.291 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.291 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.291 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.291 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.291 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.291 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.291 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.291 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.291 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.291 11:13:02 
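[editor's note: the node0 dump above reports HugePages_Total 512 and HugePages_Free 512, and the lookup that follows resolves HugePages_Surp to 0 -- exactly the 512 pages assigned to node0, i.e. 512 x 2048 kB = 1 GiB of pinned hugepage memory on that node. A one-liner to reproduce the conversion:

    echo $(( 512 * 2048 / 1024 / 1024 )) GiB   # -> 1 GiB on node0
]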
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue [trace condensed: the read/compare loop skips every node0 meminfo key from the dump above until HugePages_Surp matches] 00:05:07.292 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:07.292 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:07.292 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:07.293 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:07.293 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:07.293 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:07.293 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:07.293 11:13:02
setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:05:07.293 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:07.293 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:07.293 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.293 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:07.293 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:07.293 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.293 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.293 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.293 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.293 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19407244 kB' 'MemFree: 14678820 kB' 'MemUsed: 4728424 kB' 'SwapCached: 0 kB' 'Active: 1427116 kB' 'Inactive: 177896 kB' 'Active(anon): 1291180 kB' 'Inactive(anon): 0 kB' 'Active(file): 135936 kB' 'Inactive(file): 177896 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1388412 kB' 'Mapped: 80448 kB' 'AnonPages: 216672 kB' 'Shmem: 1074580 kB' 'KernelStack: 4744 kB' 'PageTables: 4000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64536 kB' 'Slab: 240028 kB' 'SReclaimable: 64536 kB' 'SUnreclaim: 175492 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:07.293 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.293 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.293 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.293 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.293 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.293 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.293 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.293 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.293 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.293 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.293 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.293 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.293 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.293 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.293 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.293 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
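[editor's note: node1's dump mirrors node0's: HugePages_Total 1024 and HugePages_Free 1024, with the loop below resolving HugePages_Surp to 0, i.e. 1024 x 2048 kB = 2 GiB on node1, and 512 + 1024 = 1536 matches the global HugePages_Total read earlier. The same per-node counts are also exposed directly in sysfs, which avoids scanning meminfo at all; a sketch:

    # Sum 2 MiB hugepage counts straight from sysfs (a real kernel interface).
    for f in /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages; do
        echo "$f: $(cat "$f")"
    done
]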
00:05:07.293 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue [trace condensed: the read/compare loop skips the remaining node1 meminfo keys, Active through ShmemHugePages, against the HugePages_Surp pattern] 00:05:07.294 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.294 11:13:02
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.294 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.294 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.294 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.294 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.294 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.294 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.294 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.294 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.294 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.294 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.294 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.294 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.294 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.294 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.294 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.294 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.294 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.294 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.294 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.294 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.294 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:07.294 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.294 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.294 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.294 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:07.294 11:13:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:07.294 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:07.294 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:07.294 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:07.294 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:07.294 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:07.294 node0=512 expecting 512 00:05:07.294 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:07.294 11:13:02 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:07.294 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:07.294 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:05:07.294 node1=1024 expecting 1024 00:05:07.294 11:13:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:05:07.294 00:05:07.294 real 0m1.879s 00:05:07.294 user 0m0.795s 00:05:07.294 sys 0m1.064s 00:05:07.294 11:13:02 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:07.294 11:13:02 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:07.294 ************************************ 00:05:07.294 END TEST custom_alloc 00:05:07.294 ************************************ 00:05:07.294 11:13:02 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:07.294 11:13:02 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:07.294 11:13:02 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:07.294 11:13:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:07.294 ************************************ 00:05:07.294 START TEST no_shrink_alloc 00:05:07.294 ************************************ 00:05:07.294 11:13:02 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:05:07.294 11:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:07.294 11:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:07.294 11:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:07.294 11:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:07.294 11:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:07.294 11:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:07.294 11:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:07.294 11:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:07.294 11:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:07.294 11:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:07.294 11:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:07.294 11:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:07.294 11:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:07.294 11:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:07.294 11:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:07.294 11:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:07.294 11:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:07.294 11:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:07.294 11:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 
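The get_test_nr_hugepages trace above reduces to a few lines of arithmetic: a requested size is converted into a page count and assigned to the requested NUMA nodes. A minimal sketch of that logic (assuming size is in kB and the default hugepage size is 2048 kB, as the Hugepagesize field in the snapshots below reports; variable names follow the xtrace, not necessarily the SPDK source):

    #!/usr/bin/env bash
    # Sketch: turn a requested size in kB into a hugepage count and
    # assign it to the requested NUMA nodes (node 0 here).
    default_hugepages=2048                   # kB, Hugepagesize from /proc/meminfo
    size=2097152                             # kB (2 GiB), first argument in the trace
    user_nodes=(0)                           # remaining arguments in the trace
    (( size >= default_hugepages )) || exit 1
    nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024 pages
    declare -A nodes_test=()
    for node in "${user_nodes[@]}"; do
        nodes_test[$node]=$nr_hugepages      # all 1024 pages requested on node 0
    done

This matches the values in the trace: size=2097152 yields nr_hugepages=1024, and node 0 receives the full allocation.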
00:05:07.294 11:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:05:07.294 11:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:07.294 11:13:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:08.671 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:08.671 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:08.671 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:08.671 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:08.671 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:08.671 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:08.671 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:08.671 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:08.671 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:08.671 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:08.671 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:08.671 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:08.671 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:08.671 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:08.671 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:08.671 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:08.671 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:08.934 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:05:08.934 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:08.934 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:08.934 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:08.934 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:08.934 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:08.934 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:08.934 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:08.934 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:08.934 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:08.934 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:08.934 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:08.934 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:08.934 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:08.934 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:08.934 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:08.934 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:08.934 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
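Two things happen in the preamble just traced: the @96 test checks that transparent hugepages are not set to "[never]" (here the mode string is "always [madvise] never", i.e. madvise) before AnonHugePages is sampled, and get_meminfo picks its input file and strips any per-node prefix before scanning. Roughly, as a sketch (get_meminfo_lines is an illustrative name, not an SPDK function; extglob is assumed on, since the +([0-9]) pattern requires it):

    #!/usr/bin/env bash
    shopt -s extglob
    # Sketch: with no node argument, read system-wide /proc/meminfo; with
    # one, read the per-node file and drop its "Node <n> " line prefix.
    get_meminfo_lines() {
        local node=$1 mem_f=/proc/meminfo
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines look like "Node 0 MemTotal: ..."
        printf '%s\n' "${mem[@]}"
    }

With node empty, the [[ -e /sys/devices/system/node/node/meminfo ]] test fails (exactly as the trace shows), so the system-wide /proc/meminfo is used.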
00:05:08.934 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:08.934 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:08.934 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29284808 kB' 'MemAvailable: 32867124 kB' 'Buffers: 3736 kB' 'Cached: 10183700 kB' 'SwapCached: 0 kB' 'Active: 7211920 kB' 'Inactive: 3507860 kB' 'Active(anon): 6817096 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3507860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535532 kB' 'Mapped: 177756 kB' 'Shmem: 6284752 kB' 'KReclaimable: 183340 kB' 'Slab: 538424 kB' 'SReclaimable: 183340 kB' 'SUnreclaim: 355084 kB' 'KernelStack: 12464 kB' 'PageTables: 7772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7938884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195744 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1793628 kB' 'DirectMap2M: 14903296 kB' 'DirectMap1G: 35651584 kB'
00:05:08.935 [xtrace scan condensed: setup/common.sh@31-32 reads each snapshot line (MemTotal .. HardwareCorrupted) and continues past every key that is not AnonHugePages]
00:05:08.936 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:08.936 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:08.936 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:08.936 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:08.936 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:08.936 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:08.936 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:08.936 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:08.936 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:08.936 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:08.936 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:08.936 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:08.936 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:08.936 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:08.936 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:08.936 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:08.936 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29284992 kB' 'MemAvailable: 32867308 kB' 'Buffers: 3736 kB' 'Cached: 10183704 kB' 'SwapCached: 0 kB' 'Active: 7211980 kB' 'Inactive: 3507860 kB' 'Active(anon): 6817156 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3507860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535580 kB' 'Mapped: 177708 kB' 'Shmem: 6284756 kB' 'KReclaimable: 183340 kB' 'Slab: 538416 kB' 'SReclaimable: 183340 kB' 'SUnreclaim: 355076 kB' 'KernelStack: 12432 kB' 'PageTables: 7664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7938900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195712 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1793628 kB' 'DirectMap2M: 14903296 kB' 'DirectMap1G: 35651584 kB'
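Each of the condensed scans in this trace follows the same pattern: split every snapshot line on ': ', skip keys that do not match the requested field, and echo the value of the one that does (0 here, since AnonHugePages and HugePages_Surp are both 0 kB/0 in these snapshots). A minimal standalone version of that loop (get_value is an illustrative name for the sketch, not the SPDK helper itself):

    #!/usr/bin/env bash
    # Sketch: return the value of one /proc/meminfo field, 0 if absent.
    get_value() {
        local get=$1 var val
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the per-key "continue" lines in the trace
            echo "${val:-0}"
            return 0
        done < /proc/meminfo
        echo 0
    }
    anon=$(get_value AnonHugePages)   # 0 in the snapshot above

The IFS=': ' read splits "AnonHugePages: 0 kB" into var=AnonHugePages, val=0, with the unit swallowed by the trailing _, which is why the trace's echo emits a bare 0.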
00:05:08.936 [xtrace scan condensed: setup/common.sh@31-32 reads each snapshot line (MemTotal .. HugePages_Rsvd) and continues past every key that is not HugePages_Surp]
00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29285356 kB' 'MemAvailable: 32867672 kB' 'Buffers: 3736 kB' 'Cached: 10183720 kB' 'SwapCached: 0 kB' 'Active: 7211908 kB' 'Inactive: 3507860 kB' 'Active(anon): 6817084 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3507860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535516 kB' 'Mapped: 177708 kB' 'Shmem: 6284772 kB' 'KReclaimable: 183340 kB' 'Slab: 538468 kB' 'SReclaimable: 183340 kB' 'SUnreclaim: 355128 kB' 'KernelStack: 12416 kB' 'PageTables: 7636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7938924 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195696 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1793628 kB' 'DirectMap2M: 14903296 kB' 'DirectMap1G: 35651584 kB'
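With anon=0 and surp=0 already banked and HugePages_Rsvd being fetched below, the verification has all of its inputs. A sketch of how those figures typically combine with the HugePages_Total/HugePages_Free fields visible in the snapshots (the formula is an assumption for illustration; the authoritative check is verify_nr_hugepages in setup/hugepages.sh, and hp is a hypothetical helper name):

    #!/usr/bin/env bash
    # Sketch: pull the hugepage counters straight from /proc/meminfo and
    # apply an assumed no-shrink check against the configured 1024 pages.
    hp() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }
    surp=$(hp HugePages_Surp)     # 0 in these snapshots
    resv=$(hp HugePages_Rsvd)     # the lookup in progress below
    total=$(hp HugePages_Total)   # 1024
    free=$(hp HugePages_Free)     # 1024
    # Assumed form of the check: no surplus or reserved pages, and the
    # full 2 GiB pool (1024 x 2048 kB pages) still allocated.
    (( total - surp - resv == 1024 )) && echo "nr_hugepages OK: $total total, $free free"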
-- setup/common.sh@31 -- # IFS=': ' 00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.938 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.939 11:13:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.939 11:13:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.939 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.940 11:13:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo 
nr_hugepages=1024
00:05:08.940 nr_hugepages=1024
00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:08.940 resv_hugepages=0
00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:08.940 surplus_hugepages=0
00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:08.940 anon_hugepages=0
00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:08.940 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29285356 kB' 'MemAvailable: 32867672 kB' 'Buffers: 3736 kB' 'Cached: 10183744 kB' 'SwapCached: 0 kB' 'Active: 7212324 kB' 'Inactive: 3507860 kB' 'Active(anon): 6817500 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3507860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535892 kB' 'Mapped: 177768 kB' 'Shmem: 6284796 kB' 'KReclaimable: 183340 kB' 'Slab: 538468 kB' 'SReclaimable: 183340 kB' 'SUnreclaim: 355128 kB' 'KernelStack: 12464 kB' 'PageTables: 7824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7939312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195712 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1793628 kB' 'DirectMap2M: 14903296 kB' 'DirectMap1G: 35651584 kB'
00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@32 -- # continue 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.941 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 11:13:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.942 11:13:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:08.942 
11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:08.942 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:08.943 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:08.943 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:08.943 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:08.943 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:08.943 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:08.943 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.943 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:08.943 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:08.943 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.943 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.943 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.943 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.943 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 12517400 kB' 'MemUsed: 12102012 kB' 'SwapCached: 0 kB' 'Active: 5785976 kB' 'Inactive: 3329964 kB' 'Active(anon): 5527088 kB' 'Inactive(anon): 0 kB' 'Active(file): 258888 kB' 'Inactive(file): 3329964 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8798976 kB' 'Mapped: 97760 kB' 'AnonPages: 320136 kB' 'Shmem: 5210124 kB' 'KernelStack: 7848 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118804 kB' 'Slab: 298396 kB' 'SReclaimable: 118804 kB' 'SUnreclaim: 179592 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:08.943 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.943 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.943 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.943 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.943 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.943 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.943 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.943 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.943 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.943 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.943 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.943 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.943 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.943 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.943 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.943 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.943 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.943 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.943 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.943 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.943 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:08.943 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:08.943 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:08.943 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:08.943 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.203 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.203 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.203 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.203 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.203 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.203 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.203 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.203 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.203 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.203 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.203 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.203 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.203 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.203 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.203 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.203 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.203 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.203 11:13:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # scan continues over the remaining /proc/meminfo keys (Mlocked .. HugePages_Free); none match HugePages_Surp
00:05:09.204 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.204 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:09.204 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:09.204 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:09.204 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:09.204 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:09.204 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:09.204 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:05:09.204 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:09.204 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:09.204 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:09.204 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:05:09.204 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:09.204 11:13:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:10.585 0000:00:04.0-0000:00:04.7 (8086 0e20-0e27): Already using the vfio-pci driver
00:05:10.585 0000:80:04.0-0000:80:04.7 (8086 0e20-0e27): Already using the vfio-pci driver
00:05:10.585 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:10.585 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:05:10.585 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
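A minimal sketch of the per-node bookkeeping traced at hugepages.sh@117-130, assuming one NUMA node with 1024 pages as in this run (the nodes_test/nodes_sys values are assumptions drawn from this log, not the SPDK source):

  #!/usr/bin/env bash
  # Sketch of the pattern at hugepages.sh@117-130; values assumed from this run.
  declare -a nodes_test=([0]=1024)   # observed per-node hugepage counts
  declare -a nodes_sys=([0]=1024)    # expected per-node hugepage counts
  declare -a sorted_t sorted_s

  for node in "${!nodes_test[@]}"; do
      # Indexed-array subscripts are arithmetic, so using the count itself as
      # the index collapses duplicates: the arrays act as sets of distinct counts.
      sorted_t[nodes_test[node]]=1
      sorted_s[nodes_sys[node]]=1
      echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
  done

  # Verification passes when the sets of distinct counts agree.
  [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "hugepage counts verified"

Run as-is, this prints the same "node0=1024 expecting 1024" line that appears in the trace above.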
00:05:10.585 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89-94 -- # local node sorted_t sorted_s surp resv anon
00:05:10.585 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:10.585 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:10.585 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-31 -- # get=AnonHugePages; node=; mem_f=/proc/meminfo; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }"); IFS=': '; read -r var val _
00:05:10.586 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29280920 kB' 'MemAvailable: 32863236 kB' 'Buffers: 3736 kB' 'Cached: 10183816 kB' 'SwapCached: 0 kB' 'Active: 7212460 kB' 'Inactive: 3507860 kB' 'Active(anon): 6817636 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3507860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535928 kB' 'Mapped: 177872 kB' 'Shmem: 6284868 kB' 'KReclaimable: 183340 kB' 'Slab: 538480 kB' 'SReclaimable: 183340 kB' 'SUnreclaim: 355140 kB' 'KernelStack: 12480 kB' 'PageTables: 7804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7939496 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195744 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1793628 kB' 'DirectMap2M: 14903296 kB' 'DirectMap1G: 35651584 kB'
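The common.sh@17-31 prologue and the printf above show the whole get_meminfo mechanism: slurp the meminfo file with mapfile, strip any "Node N " prefix with an extglob expansion, then split each line on ': ' and compare it against the requested key. A self-contained re-creation of that flow follows; it mirrors the trace, but is a sketch rather than the verbatim SPDK source:

  #!/usr/bin/env bash
  shopt -s extglob   # required for the +([0-9]) pattern below

  # Sketch of get_meminfo as traced at setup/common.sh@17-33.
  get_meminfo() {
      local get=$1 node=${2:-}
      local var val _ line
      local mem_f=/proc/meminfo mem
      # With a node argument, prefer the per-node sysfs meminfo file.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix, if present
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
      done
      return 1
  }

  get_meminfo HugePages_Total   # prints 1024 on the host traced above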
00:05:10.586 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # scan over the /proc/meminfo keys just printed (MemTotal .. HardwareCorrupted); none match AnonHugePages
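One detail worth decoding in these scans: the right-hand side of each [[ ... == ... ]] appears with every character backslash-escaped. That is simply how xtrace prints a quoted comparison string; inside [[ ]] an unquoted right-hand side is a glob pattern, while quoting (or escaping) forces a literal match. A quick illustration:

  var=AnonHugePages
  [[ $var == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] && echo "literal (escaped form, as xtrace renders it)"
  [[ $var == "AnonHugePages" ]]            && echo "literal (quoted form, as usually written)"
  [[ $var == Anon* ]]                      && echo "pattern (unquoted RHS is treated as a glob)"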
00:05:10.587 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:10.587 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:10.587 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:10.587 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:10.587 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:10.587 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-31 -- # get=HugePages_Surp; node=; mem_f=/proc/meminfo; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }"); IFS=': '; read -r var val _
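The prologue above kicks off a second full pass over /proc/meminfo just to read HugePages_Surp. For comparison only (this is not what common.sh does), a single field can also be pulled in one shot:

  # Illustrative one-liner equivalent of one scan; not SPDK code.
  awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo   # prints 0 on this host

The loop form in common.sh has the advantage of serving both the global file and the per-node sysfs files, with their "Node N " prefixes, through one code path.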
00:05:10.587 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29280920 kB' 'MemAvailable: 32863236 kB' 'Buffers: 3736 kB' 'Cached: 10183820 kB' 'SwapCached: 0 kB' 'Active: 7212524 kB' 'Inactive: 3507860 kB' 'Active(anon): 6817700 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3507860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535996 kB' 'Mapped: 177776 kB' 'Shmem: 6284872 kB' 'KReclaimable: 183340 kB' 'Slab: 538476 kB' 'SReclaimable: 183340 kB' 'SUnreclaim: 355136 kB' 'KernelStack: 12480 kB' 'PageTables: 7816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7939512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195728 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1793628 kB' 'DirectMap2M: 14903296 kB' 'DirectMap1G: 35651584 kB'
00:05:10.589 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # scan over the keys just printed (MemTotal .. HugePages_Rsvd); none match HugePages_Surp
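Stepping back: the INFO line earlier in this excerpt, "Requested 512 hugepages but 1024 already allocated on node0", is exactly the behavior the no_shrink_alloc test verifies. With NRHUGE=512 and CLEAR_HUGE=no, the setup script grows a hugepage allocation but never shrinks one. A hypothetical sketch of that decision (the ensure_hugepages name and structure are illustrative assumptions, not the actual scripts/setup.sh code; the sysfs path is the standard per-node 2048 kB counter):

  # Hypothetical illustration of the no-shrink policy this test verifies.
  ensure_hugepages() {
      local node=$1 want=$2
      local nr=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
      local have
      have=$(<"$nr")
      if (( have >= want )); then
          # Existing allocation already covers the request: report, don't shrink.
          echo "INFO: Requested $want hugepages but $have already allocated on node$node"
          return 0
      fi
      echo "$want" > "$nr"   # grow only (writing requires root)
  }

  ensure_hugepages 0 512   # with 1024 pages present, prints the INFO line above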
00:05:10.589 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.589 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:10.589 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:10.589 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:10.589 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:10.589 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-31 -- # get=HugePages_Rsvd; node=; mem_f=/proc/meminfo; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }"); IFS=': '; read -r var val _
00:05:10.589 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29280920 kB' 'MemAvailable: 32863236 kB' 'Buffers: 3736 kB' 'Cached: 10183840 kB' 'SwapCached: 0 kB' 'Active: 7212720 kB' 'Inactive: 3507860 kB' 'Active(anon): 6817896 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3507860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536168 kB' 'Mapped: 177776 kB' 'Shmem: 6284892 kB' 'KReclaimable: 183340 kB' 'Slab: 538476 kB' 'SReclaimable: 183340 kB' 'SUnreclaim: 355136 kB' 'KernelStack: 12480 kB' 'PageTables: 7820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7941528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195760 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1793628 kB' 'DirectMap2M: 14903296 kB' 'DirectMap1G: 35651584 kB'
00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # scan over the keys just printed, looking for HugePages_Rsvd
00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.852 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.853 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:10.853 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.853 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.853 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.853 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:10.853 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:10.853 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:10.853 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:10.853 nr_hugepages=1024 00:05:10.853 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:10.853 resv_hugepages=0 00:05:10.853 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:10.853 surplus_hugepages=0 00:05:10.853 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:10.853 anon_hugepages=0 00:05:10.853 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:10.853 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:10.853 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:10.853 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:10.853 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:10.853 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:10.853 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:10.853 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.853 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.853 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.853 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.853 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.853 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:10.853 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:10.853 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29293964 kB' 'MemAvailable: 32876280 kB' 'Buffers: 3736 kB' 
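The get_meminfo helper traced above reads a meminfo file field-by-field with IFS=': ' and read -r, skipping ("continue") every key until the requested one matches, then echoes its value for command substitution. A minimal standalone sketch of that technique; the function body below is illustrative, not the verbatim setup/common.sh source:

    #!/usr/bin/env bash
    # Look up a single /proc/meminfo key the way the traced loop does:
    # split each line on ': ', skip non-matching keys, echo the value.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"    # bare number; the 'kB' unit lands in the discarded field
                return 0
            fi
        done </proc/meminfo
        return 1
    }

    resv=$(get_meminfo HugePages_Rsvd)    # captured by substitution, as at hugepages.sh@100

Because the unit falls into the throwaway third field, callers can use the result directly in arithmetic, as the (( 1024 == nr_hugepages + surp + resv )) checks in this trace do.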
00:05:10.853 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:10.853 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:10.853 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:10.853 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:10.853 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:10.853 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:10.853 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:10.853 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:10.853 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:10.853 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:10.853 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:10.853 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:10.853 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29293964 kB' 'MemAvailable: 32876280 kB' 'Buffers: 3736 kB' 'Cached: 10183840 kB' 'SwapCached: 0 kB' 'Active: 7212892 kB' 'Inactive: 3507860 kB' 'Active(anon): 6818068 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3507860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536444 kB' 'Mapped: 177776 kB' 'Shmem: 6284892 kB' 'KReclaimable: 183340 kB' 'Slab: 538468 kB' 'SReclaimable: 183340 kB' 'SUnreclaim: 355128 kB' 'KernelStack: 12624 kB' 'PageTables: 8468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7941916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195840 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1793628 kB' 'DirectMap2M: 14903296 kB' 'DirectMap1G: 35651584 kB'
[setup/common.sh@31-32, 00:05:10.853-00:05:10.854: the same read loop walks each key above, continuing on every non-match, until HugePages_Total matches]
00:05:10.854 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.854 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:05:10.854 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:10.854 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:10.854 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:10.854 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:05:10.855 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:10.855 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:10.855 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:10.855 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:10.855 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:10.855 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:10.855 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:10.855 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:10.855 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:10.855 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:10.855 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:05:10.855 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:10.855 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:10.855 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:10.855 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:10.855 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:10.855 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:10.855 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
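This second get_meminfo call passes a node argument, so mem_f switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix; common.sh@29 strips that prefix with an extglob expansion so the same key scan works for both layouts. A standalone sketch of that per-node read, illustrative rather than the verbatim source:

    #!/usr/bin/env bash
    # Per-node meminfo lines look like "Node 0 HugePages_Surp: 0".
    shopt -s extglob    # enables the +([0-9]) pattern used at common.sh@29
    node=0
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # drop "Node <n> "; a no-op for /proc/meminfo lines
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == HugePages_Surp ]] && { echo "$val"; break; }
    done

Since the prefix strip is a no-op when no "Node <n> " prefix is present, one code path serves both the system-wide and the per-node file.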
00:05:10.855 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:10.855 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:10.855 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 12527368 kB' 'MemUsed: 12092044 kB' 'SwapCached: 0 kB' 'Active: 5787092 kB' 'Inactive: 3329964 kB' 'Active(anon): 5528204 kB' 'Inactive(anon): 0 kB' 'Active(file): 258888 kB' 'Inactive(file): 3329964 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8798988 kB' 'Mapped: 97760 kB' 'AnonPages: 321244 kB' 'Shmem: 5210136 kB' 'KernelStack: 8360 kB' 'PageTables: 5432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118804 kB' 'Slab: 298480 kB' 'SReclaimable: 118804 kB' 'SUnreclaim: 179676 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[setup/common.sh@31-32, 00:05:10.855-00:05:10.856: the read loop walks each node0 key above, continuing on every non-match, until HugePages_Surp matches]
00:05:10.856 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.856 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:10.856 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:10.856 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:10.856 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:10.856 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:10.856 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:10.856 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:10.856 node0=1024 expecting 1024
00:05:10.856 11:13:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:10.856 
00:05:10.856 real 0m3.488s
00:05:10.856 user 0m1.391s
00:05:10.856 sys 0m2.054s
00:05:10.856 11:13:06 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:10.856 11:13:06 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:10.856 ************************************
00:05:10.856 END TEST no_shrink_alloc
00:05:10.856 ************************************
00:05:10.856 11:13:06 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:05:10.856 11:13:06 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:05:10.856 11:13:06 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:10.856 11:13:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:10.856 11:13:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:10.856 11:13:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:10.856 11:13:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:10.856 11:13:06 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:10.856 11:13:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:10.856 11:13:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:10.856 11:13:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:10.856 11:13:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:10.856 11:13:06 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:05:10.856 11:13:06 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:05:10.856 
00:05:10.856 real 0m14.541s
00:05:10.856 user 0m5.655s
00:05:10.856 sys 0m7.932s
00:05:10.856 11:13:06 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:10.856 11:13:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:10.856 ************************************
00:05:10.856 END TEST hugepages
00:05:10.856 ************************************
00:05:10.856 11:13:06 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:05:10.856 11:13:06 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:10.856 11:13:06 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:10.856 11:13:06 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:05:10.856 ************************************
00:05:10.856 START TEST driver
00:05:10.856 ************************************
00:05:10.856 11:13:06 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:05:11.115 * Looking for test storage...
00:05:11.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:11.115 11:13:06 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:11.115 11:13:06 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:11.115 11:13:06 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:14.404 11:13:09 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:14.404 11:13:09 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.404 11:13:09 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.404 11:13:09 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:14.404 ************************************ 00:05:14.404 START TEST guess_driver 00:05:14.404 ************************************ 00:05:14.404 11:13:09 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:05:14.404 11:13:09 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:14.404 11:13:09 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:14.404 11:13:09 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:14.404 11:13:09 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:14.404 11:13:09 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:14.404 11:13:09 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:14.404 11:13:09 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:14.404 11:13:09 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:14.404 11:13:09 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:14.404 11:13:09 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 143 > 0 )) 00:05:14.404 11:13:09 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:14.405 11:13:09 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:05:14.405 11:13:09 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:05:14.405 11:13:09 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:14.405 11:13:09 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:14.405 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:14.405 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:14.405 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:14.405 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:14.405 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:14.405 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:14.405 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:14.405 11:13:09 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:05:14.405 11:13:09 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:05:14.405 11:13:09 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:14.405 11:13:09 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:14.405 11:13:09 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:14.405 Looking for driver=vfio-pci 00:05:14.405 11:13:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:14.405 11:13:09 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:14.405 11:13:09 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.405 11:13:09 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:15.340 11:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:15.341 11:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:15.341 11:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:15.341 11:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:15.341 11:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:15.341 11:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:15.341 11:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:15.341 11:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:15.341 11:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:15.341 11:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:15.341 11:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:15.341 11:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:15.341 11:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:15.341 11:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:15.341 11:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:15.341 11:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:15.341 11:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:15.341 11:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:15.341 11:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:15.341 11:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:15.341 11:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:15.341 11:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:15.341 11:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:15.341 11:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:15.341 11:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:15.341 11:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:15.341 11:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:15.341 11:13:10 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:15.341 11:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:15.341 11:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:15.341 11:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:15.341 11:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:15.341 11:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:15.341 11:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:15.341 11:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:15.341 11:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:15.341 11:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:15.341 11:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:15.341 11:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:15.341 11:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:15.341 11:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:15.341 11:13:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:15.600 11:13:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:15.600 11:13:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:15.600 11:13:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:15.600 11:13:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:15.600 11:13:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:15.600 11:13:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:16.537 11:13:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:16.537 11:13:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:16.537 11:13:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:16.537 11:13:12 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:16.537 11:13:12 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:16.537 11:13:12 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:16.537 11:13:12 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:19.830 00:05:19.830 real 0m5.518s 00:05:19.830 user 0m1.341s 00:05:19.830 sys 0m2.370s 00:05:19.830 11:13:14 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.830 11:13:14 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:19.830 ************************************ 00:05:19.830 END TEST guess_driver 00:05:19.830 ************************************ 00:05:19.830 00:05:19.830 real 0m8.482s 00:05:19.830 user 0m2.042s 00:05:19.830 sys 0m3.688s 00:05:19.830 11:13:14 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.830 
11:13:14 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:19.830 ************************************ 00:05:19.830 END TEST driver 00:05:19.830 ************************************ 00:05:19.830 11:13:14 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:19.830 11:13:14 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.830 11:13:14 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.830 11:13:14 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:19.830 ************************************ 00:05:19.830 START TEST devices 00:05:19.830 ************************************ 00:05:19.830 11:13:14 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:19.830 * Looking for test storage... 00:05:19.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:19.830 11:13:15 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:19.830 11:13:15 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:19.830 11:13:15 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:19.830 11:13:15 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:21.751 11:13:16 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:21.751 11:13:16 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:21.751 11:13:16 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:21.751 11:13:16 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:21.751 11:13:16 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:21.751 11:13:16 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:21.751 11:13:16 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:21.751 11:13:16 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:21.751 11:13:16 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:21.751 11:13:16 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:21.751 11:13:16 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:21.751 11:13:16 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:21.751 11:13:16 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:21.751 11:13:16 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:21.751 11:13:16 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:21.751 11:13:16 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:21.751 11:13:16 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:21.751 11:13:16 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:82:00.0 00:05:21.751 11:13:16 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\2\:\0\0\.\0* ]] 00:05:21.751 11:13:16 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:21.751 11:13:16 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:21.751 11:13:16 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:21.751 No valid GPT data, 
bailing 00:05:21.751 11:13:16 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:21.751 11:13:16 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:21.751 11:13:16 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:21.751 11:13:16 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:21.751 11:13:16 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:21.751 11:13:16 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:21.751 11:13:16 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:05:21.751 11:13:16 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:05:21.751 11:13:16 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:21.751 11:13:16 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:82:00.0 00:05:21.751 11:13:16 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:21.751 11:13:16 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:21.751 11:13:16 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:21.751 11:13:16 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:21.751 11:13:16 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:21.751 11:13:16 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:21.751 ************************************ 00:05:21.751 START TEST nvme_mount 00:05:21.752 ************************************ 00:05:21.752 11:13:17 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:05:21.752 11:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:21.752 11:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:21.752 11:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:21.752 11:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:21.752 11:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:21.752 11:13:17 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:21.752 11:13:17 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:21.752 11:13:17 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:21.752 11:13:17 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:21.752 11:13:17 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:21.752 11:13:17 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:21.752 11:13:17 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:21.752 11:13:17 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:21.752 11:13:17 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:21.752 11:13:17 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:21.752 11:13:17 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:21.752 11:13:17 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:21.752 11:13:17 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:21.752 11:13:17 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:22.690 Creating new GPT entries in memory. 00:05:22.690 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:22.690 other utilities. 00:05:22.690 11:13:18 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:22.690 11:13:18 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:22.690 11:13:18 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:22.690 11:13:18 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:22.690 11:13:18 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:23.627 Creating new GPT entries in memory. 00:05:23.627 The operation has completed successfully. 00:05:23.627 11:13:19 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:23.627 11:13:19 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:23.627 11:13:19 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1981655 00:05:23.627 11:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:23.627 11:13:19 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:23.627 11:13:19 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:23.627 11:13:19 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:23.627 11:13:19 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:23.627 11:13:19 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:23.627 11:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:82:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:23.627 11:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:05:23.627 11:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:23.627 11:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:23.627 11:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:23.627 11:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:23.627 11:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:23.627 11:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:23.627 11:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
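The partition created just above comes from simple sector arithmetic in setup/common.sh: the requested size of 1073741824 bytes is divided by 512 to give 2097152 sectors, and with part_start=2048 that makes part_end = 2048 + 2097152 - 1 = 2099199, exactly the `--new=1:2048:2099199` in the trace. A condensed, destructive sketch of the same partition/mkfs/mount sequence, with the disk path and mount point as stand-ins for the ones in this log:

    # 1 GiB expressed in 512-byte sectors, as traced above.
    size=1073741824
    (( size /= 512 ))                        # 2097152 sectors
    (( part_start = 2048 ))
    (( part_end = part_start + size - 1 ))   # 2099199

    disk=/dev/nvme0n1                        # stand-in; this erases the disk!
    sgdisk "$disk" --zap-all
    sgdisk "$disk" --new=1:"$part_start":"$part_end"
    mkfs.ext4 -qF "${disk}p1"
    mnt=/tmp/nvme_mount                      # the test mounts under spdk/test/setup/nvme_mount
    mkdir -p "$mnt"
    mount "${disk}p1" "$mnt"
    touch "$mnt/test_nvme"                   # the dummy file verify() later looks for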
00:05:23.627 11:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.627 11:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:05:23.627 11:13:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:23.627 11:13:19 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:23.627 11:13:19 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:25.003 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:25.003 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:25.003 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:25.003 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.003 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:25.003 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.003 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:25.003 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.003 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:25.003 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.003 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:25.003 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.003 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:25.003 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.003 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:25.003 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.003 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:25.003 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.003 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:25.003 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.003 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:25.003 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.003 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:25.003 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.003 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:25.003 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:05:25.003 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:25.003 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.003 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:25.003 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.003 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:25.003 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.003 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:25.003 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.003 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:25.003 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.263 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:25.263 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:25.263 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:25.263 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:25.263 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:25.263 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:25.263 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:25.263 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:25.263 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:25.263 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:25.263 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:25.263 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:25.263 11:13:20 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:25.522 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:25.522 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:25.522 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:25.522 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:25.522 11:13:21 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:25.522 11:13:21 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:25.522 11:13:21 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:25.522 11:13:21 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:25.522 11:13:21 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:25.522 11:13:21 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:25.522 11:13:21 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:82:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:25.522 11:13:21 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:05:25.522 11:13:21 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:25.522 11:13:21 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:25.522 11:13:21 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:25.522 11:13:21 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:25.522 11:13:21 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:25.522 11:13:21 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:25.522 11:13:21 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:25.522 11:13:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.522 11:13:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:05:25.522 11:13:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:25.522 11:13:21 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:25.522 11:13:21 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:26.900 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:26.900 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:26.900 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:26.900 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.900 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:26.900 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.900 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:26.900 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.900 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:26.900 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.900 11:13:22 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:26.900 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.900 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:26.900 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.900 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:26.900 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.900 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:26.900 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.900 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:26.900 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.900 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:26.900 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.900 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:26.900 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.900 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:26.900 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.900 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:26.900 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.900 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:26.900 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.900 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:26.900 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.900 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:26.900 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.900 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:26.900 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.900 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:26.900 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:26.900 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:26.900 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:26.900 11:13:22 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:26.900 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:27.159 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:82:00.0 data@nvme0n1 '' '' 00:05:27.159 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:05:27.159 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:27.159 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:27.159 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:27.159 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:27.159 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:27.159 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:27.159 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.159 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:05:27.159 11:13:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:27.159 11:13:22 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:27.159 11:13:22 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:28.536 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:28.536 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:28.536 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:28.536 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.536 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:28.536 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.536 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:28.536 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.536 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:28.536 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.536 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:28.536 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.536 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:28.536 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.536 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:28.536 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.536 11:13:24 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:28.536 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.536 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:28.536 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.536 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:28.536 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.536 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:28.536 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.536 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:28.536 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.536 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:28.536 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.536 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:28.536 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.536 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:28.536 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.536 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:28.536 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.536 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:28.536 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.795 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:28.795 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:28.795 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:28.795 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:28.795 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:28.795 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:28.795 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:28.795 11:13:24 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:28.795 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:28.795 00:05:28.795 real 0m7.222s 00:05:28.795 user 0m1.777s 00:05:28.795 sys 0m3.052s 00:05:28.795 11:13:24 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.795 11:13:24 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:28.795 ************************************ 00:05:28.795 END TEST nvme_mount 00:05:28.795 ************************************ 
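The dm_mount test that starts below repeats the partitioning dance with two 1 GiB partitions and then stacks a device-mapper node on top of them, which is why the later PCI scan reports `holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0`. A rough equivalent of that stacking, assuming a linear concatenation; devices.sh may build the table differently in detail:

    # Concatenate two partitions into one linear dm device (sizes in 512 B sectors).
    p1=/dev/nvme0n1p1 p2=/dev/nvme0n1p2      # stand-ins for the log's partitions
    s1=$(blockdev --getsz "$p1")
    s2=$(blockdev --getsz "$p2")
    dmsetup create nvme_dm_test <<EOF
    0 $s1 linear $p1 0
    $s1 $s2 linear $p2 0
    EOF
    readlink -f /dev/mapper/nvme_dm_test     # resolves to /dev/dm-0, as in the trace
    ls /sys/class/block/nvme0n1p1/holders    # now lists dm-0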
00:05:28.795 11:13:24 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:28.795 11:13:24 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:28.795 11:13:24 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.795 11:13:24 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:28.795 ************************************ 00:05:28.795 START TEST dm_mount 00:05:28.795 ************************************ 00:05:28.795 11:13:24 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:05:28.795 11:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:28.795 11:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:28.795 11:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:28.795 11:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:28.795 11:13:24 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:28.795 11:13:24 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:28.795 11:13:24 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:28.795 11:13:24 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:28.795 11:13:24 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:28.795 11:13:24 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:28.795 11:13:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:28.795 11:13:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:28.795 11:13:24 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:28.795 11:13:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:28.795 11:13:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:28.795 11:13:24 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:28.795 11:13:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:28.795 11:13:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:28.795 11:13:24 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:28.795 11:13:24 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:28.795 11:13:24 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:29.732 Creating new GPT entries in memory. 00:05:29.732 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:29.732 other utilities. 00:05:29.732 11:13:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:29.732 11:13:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:29.732 11:13:25 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:29.732 11:13:25 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:29.732 11:13:25 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:30.668 Creating new GPT entries in memory. 00:05:30.668 The operation has completed successfully. 
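Between the sgdisk call above and the one below, note the synchronization pattern common.sh wraps around repartitioning: scripts/sync_dev_uevents.sh is apparently launched in the background (its PID is what the later `wait 1984072` entry refers to), each sgdisk runs under flock on the whole disk, and the script waits for the helper so the /dev/nvme0n1p* nodes exist before mkfs uses them. The second partition spans sectors 2099200-4196351, another 2097152 sectors, i.e. a second 1 GiB. A sketch of that pattern; the backgrounding and the helper's CLI are inferred from the trace, not confirmed:

    # Serialize partitioning against udev and wait for the partition uevents.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh \
        block/partition nvme0n1p1 nvme0n1p2 &   # backgrounding inferred from 'wait <pid>'
    uevent_pid=$!
    flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
    flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
    wait "$uevent_pid"                          # cf. 'wait 1984072' in this log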
00:05:30.668 11:13:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:30.668 11:13:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:30.668 11:13:26 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:30.668 11:13:26 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:30.668 11:13:26 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:32.044 The operation has completed successfully. 00:05:32.044 11:13:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:32.044 11:13:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:32.044 11:13:27 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1984072 00:05:32.044 11:13:27 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:32.044 11:13:27 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:32.044 11:13:27 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:32.044 11:13:27 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:32.044 11:13:27 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:32.044 11:13:27 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:32.044 11:13:27 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:32.044 11:13:27 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:32.044 11:13:27 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:32.044 11:13:27 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:32.044 11:13:27 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:32.044 11:13:27 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:32.044 11:13:27 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:32.044 11:13:27 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:32.044 11:13:27 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:32.044 11:13:27 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:32.044 11:13:27 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:32.044 11:13:27 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:32.044 11:13:27 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:32.044 11:13:27 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:82:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:32.044 11:13:27 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:05:32.044 11:13:27 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:32.044 11:13:27 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:32.044 11:13:27 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:32.044 11:13:27 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:32.044 11:13:27 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:32.044 11:13:27 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:32.044 11:13:27 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:32.044 11:13:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.044 11:13:27 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:05:32.044 11:13:27 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:32.044 11:13:27 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:32.044 11:13:27 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:33.421 11:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:33.421 11:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:33.421 11:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:33.421 11:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.422 11:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:33.422 11:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.422 11:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:33.422 11:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.422 11:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:33.422 11:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.422 11:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:33.422 11:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.422 11:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:33.422 11:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.422 11:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:33.422 11:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.422 11:13:28 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:33.422 11:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.422 11:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:33.422 11:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.422 11:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:33.422 11:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.422 11:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:33.422 11:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.422 11:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:33.422 11:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.422 11:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:33.422 11:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.422 11:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:33.422 11:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.422 11:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:33.422 11:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.422 11:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:33.422 11:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.422 11:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:33.422 11:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.422 11:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:33.422 11:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:33.422 11:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:33.422 11:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:33.422 11:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:33.422 11:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:33.422 11:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:82:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:33.422 11:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:05:33.422 11:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:33.422 11:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:33.422 11:13:29 
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:33.422 11:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:33.422 11:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:33.422 11:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:33.422 11:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.422 11:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:05:33.422 11:13:29 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:33.422 11:13:29 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:33.422 11:13:29 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:34.800 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:34.800 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:34.800 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:34.800 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.800 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:34.800 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.800 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:34.800 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.800 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:34.800 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.800 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:34.800 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.800 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:34.800 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.800 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:34.800 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.800 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:34.800 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.800 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:34.800 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.800 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:34.800 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.800 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:34.800 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.800 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:34.800 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.800 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:34.800 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.800 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:34.800 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.800 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:34.800 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.800 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:34.800 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.800 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:34.800 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.058 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:35.058 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:35.058 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:35.058 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:35.058 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:35.058 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:35.058 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:35.058 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:35.058 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:35.058 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:35.058 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:35.058 11:13:30 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:35.058 00:05:35.058 real 0m6.403s 00:05:35.058 user 0m1.174s 00:05:35.058 sys 0m2.121s 00:05:35.058 11:13:30 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.058 11:13:30 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:35.058 ************************************ 00:05:35.058 END TEST dm_mount 00:05:35.058 ************************************ 00:05:35.317 11:13:30 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:35.317 11:13:30 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:35.317 11:13:30 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:35.317 11:13:30 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:35.317 11:13:30 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:35.317 11:13:30 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:35.317 11:13:30 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:35.576 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:35.576 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:35.576 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:35.576 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:35.576 11:13:30 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:35.576 11:13:30 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:35.576 11:13:30 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:35.576 11:13:30 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:35.576 11:13:30 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:35.576 11:13:30 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:35.576 11:13:30 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:35.576 00:05:35.576 real 0m16.010s 00:05:35.576 user 0m3.753s 00:05:35.576 sys 0m6.542s 00:05:35.576 11:13:31 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.576 11:13:31 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:35.576 ************************************ 00:05:35.576 END TEST devices 00:05:35.576 ************************************ 00:05:35.576 00:05:35.576 real 0m52.388s 00:05:35.576 user 0m15.755s 00:05:35.576 sys 0m25.436s 00:05:35.576 11:13:31 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.576 11:13:31 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:35.576 ************************************ 00:05:35.576 END TEST setup.sh 00:05:35.576 ************************************ 00:05:35.576 11:13:31 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:36.951 Hugepages 00:05:36.951 node hugesize free / total 00:05:36.951 node0 1048576kB 0 / 0 00:05:36.951 node0 2048kB 2048 / 2048 00:05:36.951 node1 1048576kB 0 / 0 00:05:37.210 node1 2048kB 0 / 0 00:05:37.210 00:05:37.210 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:37.210 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:37.210 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:37.210 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:37.210 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:37.210 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:37.210 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:37.210 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:37.211 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:37.211 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:37.211 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:37.211 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:37.211 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:37.211 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:37.211 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:37.211 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:37.211 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:37.211 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:37.211 11:13:32 -- spdk/autotest.sh@130 -- # uname -s 00:05:37.211 11:13:32 -- 
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:37.211 11:13:32 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:37.211 11:13:32 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:39.120 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:39.120 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:39.120 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:39.120 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:39.120 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:39.120 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:39.120 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:39.120 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:39.120 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:39.120 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:39.120 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:39.120 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:39.120 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:39.120 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:39.120 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:39.120 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:39.686 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:05:39.944 11:13:35 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:40.878 11:13:36 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:40.878 11:13:36 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:40.878 11:13:36 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:40.878 11:13:36 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:40.878 11:13:36 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:40.878 11:13:36 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:40.878 11:13:36 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:40.878 11:13:36 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:40.878 11:13:36 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:41.137 11:13:36 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:41.137 11:13:36 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:82:00.0 00:05:41.137 11:13:36 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:42.511 Waiting for block devices as requested 00:05:42.511 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:05:42.511 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:42.511 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:42.770 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:42.770 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:42.770 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:42.770 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:43.029 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:43.029 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:43.029 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:43.029 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:43.288 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:43.288 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:43.288 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:43.288 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:43.547 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:43.547 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:43.547 11:13:39 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 
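The get_nvme_bdfs helper traced just above derives the NVMe PCI addresses by asking gen_nvme.sh for a JSON config and pulling out the traddr fields with jq. A minimal standalone sketch of that discovery step, assuming $rootdir points at an SPDK source checkout (the path is illustrative, not the harness's literal value):

    # Sketch: enumerate NVMe BDFs the way get_nvme_bdfs does above.
    rootdir=/path/to/spdk                        # assumption: an SPDK checkout
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "No NVMe devices found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"                   # e.g. 0000:82:00.0 in this run

In this log the array resolves to the single controller 0000:82:00.0, which the per-bdf loop below then inspects.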
00:05:43.547 11:13:39 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:82:00.0 00:05:43.547 11:13:39 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:43.547 11:13:39 -- common/autotest_common.sh@1502 -- # grep 0000:82:00.0/nvme/nvme 00:05:43.547 11:13:39 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:05:43.547 11:13:39 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 ]] 00:05:43.547 11:13:39 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:05:43.547 11:13:39 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:43.547 11:13:39 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:43.547 11:13:39 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:43.547 11:13:39 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:43.547 11:13:39 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:43.547 11:13:39 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:43.547 11:13:39 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:05:43.547 11:13:39 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:43.547 11:13:39 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:43.547 11:13:39 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:43.547 11:13:39 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:43.547 11:13:39 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:43.547 11:13:39 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:43.547 11:13:39 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:43.547 11:13:39 -- common/autotest_common.sh@1557 -- # continue 00:05:43.547 11:13:39 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:43.547 11:13:39 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:43.547 11:13:39 -- common/autotest_common.sh@10 -- # set +x 00:05:43.806 11:13:39 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:43.806 11:13:39 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:43.806 11:13:39 -- common/autotest_common.sh@10 -- # set +x 00:05:43.806 11:13:39 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:45.183 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:45.183 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:45.183 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:45.183 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:45.183 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:45.183 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:45.183 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:45.183 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:45.183 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:45.183 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:45.441 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:45.441 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:45.441 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:45.441 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:45.441 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:45.441 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:46.378 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:05:46.378 11:13:41 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:46.378 11:13:41 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:46.378 11:13:41 -- 
common/autotest_common.sh@10 -- # set +x 00:05:46.378 11:13:41 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:46.378 11:13:41 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:46.378 11:13:41 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:46.378 11:13:41 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:46.378 11:13:41 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:46.378 11:13:41 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:46.378 11:13:41 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:46.378 11:13:41 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:46.378 11:13:41 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:46.378 11:13:41 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:46.378 11:13:41 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:46.378 11:13:42 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:46.378 11:13:42 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:82:00.0 00:05:46.637 11:13:42 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:46.637 11:13:42 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:82:00.0/device 00:05:46.637 11:13:42 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:46.637 11:13:42 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:46.637 11:13:42 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:46.637 11:13:42 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:82:00.0 00:05:46.637 11:13:42 -- common/autotest_common.sh@1592 -- # [[ -z 0000:82:00.0 ]] 00:05:46.637 11:13:42 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=1989561 00:05:46.637 11:13:42 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:46.637 11:13:42 -- common/autotest_common.sh@1598 -- # waitforlisten 1989561 00:05:46.637 11:13:42 -- common/autotest_common.sh@831 -- # '[' -z 1989561 ']' 00:05:46.637 11:13:42 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.637 11:13:42 -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:46.637 11:13:42 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.637 11:13:42 -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:46.637 11:13:42 -- common/autotest_common.sh@10 -- # set +x 00:05:46.637 [2024-07-26 11:13:42.126688] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
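The opal_revert_cleanup path traced below narrows that list with get_nvme_bdfs_by_id, keeping only controllers whose PCI device ID matches the one under test (0x0a54 here, an Intel datacenter NVMe part). A hedged sketch of that sysfs filter, reusing the $bdfs array from the discovery step:

    # Sketch: filter BDFs by PCI device ID via sysfs, as get_nvme_bdfs_by_id does.
    want=0x0a54                                  # device ID matched in this log
    matched=()
    for bdf in "${bdfs[@]}"; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")
        [[ $device == "$want" ]] && matched+=("$bdf")
    done
    printf '%s\n' "${matched[@]}"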
00:05:46.637 [2024-07-26 11:13:42.126830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1989561 ] 00:05:46.637 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.637 [2024-07-26 11:13:42.209366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.895 [2024-07-26 11:13:42.335122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.153 11:13:42 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:47.153 11:13:42 -- common/autotest_common.sh@864 -- # return 0 00:05:47.153 11:13:42 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:47.153 11:13:42 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:47.153 11:13:42 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:82:00.0 00:05:50.436 nvme0n1 00:05:50.436 11:13:45 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:50.436 [2024-07-26 11:13:46.026987] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:50.436 [2024-07-26 11:13:46.027038] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:50.436 request: 00:05:50.436 { 00:05:50.436 "nvme_ctrlr_name": "nvme0", 00:05:50.436 "password": "test", 00:05:50.436 "method": "bdev_nvme_opal_revert", 00:05:50.436 "req_id": 1 00:05:50.436 } 00:05:50.436 Got JSON-RPC error response 00:05:50.436 response: 00:05:50.436 { 00:05:50.436 "code": -32603, 00:05:50.436 "message": "Internal error" 00:05:50.436 } 00:05:50.436 11:13:46 -- common/autotest_common.sh@1604 -- # true 00:05:50.436 11:13:46 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:50.436 11:13:46 -- common/autotest_common.sh@1608 -- # killprocess 1989561 00:05:50.436 11:13:46 -- common/autotest_common.sh@950 -- # '[' -z 1989561 ']' 00:05:50.436 11:13:46 -- common/autotest_common.sh@954 -- # kill -0 1989561 00:05:50.436 11:13:46 -- common/autotest_common.sh@955 -- # uname 00:05:50.436 11:13:46 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:50.436 11:13:46 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1989561 00:05:50.436 11:13:46 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:50.436 11:13:46 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:50.436 11:13:46 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1989561' 00:05:50.436 killing process with pid 1989561 00:05:50.436 11:13:46 -- common/autotest_common.sh@969 -- # kill 1989561 00:05:50.436 11:13:46 -- common/autotest_common.sh@974 -- # wait 1989561 00:05:52.335 11:13:47 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:52.335 11:13:47 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:52.335 11:13:47 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:52.335 11:13:47 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:52.335 11:13:47 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:52.335 11:13:47 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:52.335 11:13:47 -- common/autotest_common.sh@10 -- # set +x 00:05:52.335 11:13:47 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:52.335 11:13:47 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:52.335 11:13:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.335 11:13:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.335 11:13:47 -- common/autotest_common.sh@10 -- # set +x 00:05:52.335 ************************************ 00:05:52.335 START TEST env 00:05:52.335 ************************************ 00:05:52.335 11:13:47 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:52.335 * Looking for test storage... 00:05:52.335 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:52.335 11:13:47 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:52.335 11:13:47 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.335 11:13:47 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.335 11:13:47 env -- common/autotest_common.sh@10 -- # set +x 00:05:52.594 ************************************ 00:05:52.594 START TEST env_memory 00:05:52.594 ************************************ 00:05:52.594 11:13:48 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:52.594 00:05:52.594 00:05:52.594 CUnit - A unit testing framework for C - Version 2.1-3 00:05:52.594 http://cunit.sourceforge.net/ 00:05:52.594 00:05:52.594 00:05:52.594 Suite: memory 00:05:52.594 Test: alloc and free memory map ...[2024-07-26 11:13:48.068426] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:52.594 passed 00:05:52.594 Test: mem map translation ...[2024-07-26 11:13:48.099963] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:52.594 [2024-07-26 11:13:48.099999] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:52.594 [2024-07-26 11:13:48.100067] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:52.594 [2024-07-26 11:13:48.100086] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:52.594 passed 00:05:52.594 Test: mem map registration ...[2024-07-26 11:13:48.166436] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:52.594 [2024-07-26 11:13:48.166469] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:52.594 passed 00:05:52.853 Test: mem map adjacent registrations ...passed 00:05:52.853 00:05:52.853 Run Summary: Type Total Ran Passed Failed Inactive 00:05:52.853 suites 1 1 n/a 0 0 00:05:52.853 tests 4 4 4 0 0 00:05:52.853 asserts 152 152 152 0 n/a 00:05:52.853 00:05:52.853 Elapsed time = 0.226 seconds 00:05:52.853 00:05:52.853 real 0m0.236s 00:05:52.853 user 0m0.225s 00:05:52.853 sys 0m0.010s 00:05:52.853 11:13:48 
env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.853 11:13:48 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:52.853 ************************************ 00:05:52.853 END TEST env_memory 00:05:52.853 ************************************ 00:05:52.853 11:13:48 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:52.853 11:13:48 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.853 11:13:48 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.853 11:13:48 env -- common/autotest_common.sh@10 -- # set +x 00:05:52.853 ************************************ 00:05:52.853 START TEST env_vtophys 00:05:52.853 ************************************ 00:05:52.853 11:13:48 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:52.853 EAL: lib.eal log level changed from notice to debug 00:05:52.853 EAL: Detected lcore 0 as core 0 on socket 0 00:05:52.853 EAL: Detected lcore 1 as core 1 on socket 0 00:05:52.853 EAL: Detected lcore 2 as core 2 on socket 0 00:05:52.853 EAL: Detected lcore 3 as core 3 on socket 0 00:05:52.853 EAL: Detected lcore 4 as core 4 on socket 0 00:05:52.853 EAL: Detected lcore 5 as core 5 on socket 0 00:05:52.853 EAL: Detected lcore 6 as core 8 on socket 0 00:05:52.853 EAL: Detected lcore 7 as core 9 on socket 0 00:05:52.853 EAL: Detected lcore 8 as core 10 on socket 0 00:05:52.853 EAL: Detected lcore 9 as core 11 on socket 0 00:05:52.853 EAL: Detected lcore 10 as core 12 on socket 0 00:05:52.853 EAL: Detected lcore 11 as core 13 on socket 0 00:05:52.853 EAL: Detected lcore 12 as core 0 on socket 1 00:05:52.853 EAL: Detected lcore 13 as core 1 on socket 1 00:05:52.853 EAL: Detected lcore 14 as core 2 on socket 1 00:05:52.853 EAL: Detected lcore 15 as core 3 on socket 1 00:05:52.853 EAL: Detected lcore 16 as core 4 on socket 1 00:05:52.853 EAL: Detected lcore 17 as core 5 on socket 1 00:05:52.853 EAL: Detected lcore 18 as core 8 on socket 1 00:05:52.853 EAL: Detected lcore 19 as core 9 on socket 1 00:05:52.853 EAL: Detected lcore 20 as core 10 on socket 1 00:05:52.853 EAL: Detected lcore 21 as core 11 on socket 1 00:05:52.853 EAL: Detected lcore 22 as core 12 on socket 1 00:05:52.853 EAL: Detected lcore 23 as core 13 on socket 1 00:05:52.853 EAL: Detected lcore 24 as core 0 on socket 0 00:05:52.853 EAL: Detected lcore 25 as core 1 on socket 0 00:05:52.853 EAL: Detected lcore 26 as core 2 on socket 0 00:05:52.853 EAL: Detected lcore 27 as core 3 on socket 0 00:05:52.853 EAL: Detected lcore 28 as core 4 on socket 0 00:05:52.853 EAL: Detected lcore 29 as core 5 on socket 0 00:05:52.853 EAL: Detected lcore 30 as core 8 on socket 0 00:05:52.853 EAL: Detected lcore 31 as core 9 on socket 0 00:05:52.853 EAL: Detected lcore 32 as core 10 on socket 0 00:05:52.853 EAL: Detected lcore 33 as core 11 on socket 0 00:05:52.853 EAL: Detected lcore 34 as core 12 on socket 0 00:05:52.853 EAL: Detected lcore 35 as core 13 on socket 0 00:05:52.853 EAL: Detected lcore 36 as core 0 on socket 1 00:05:52.853 EAL: Detected lcore 37 as core 1 on socket 1 00:05:52.853 EAL: Detected lcore 38 as core 2 on socket 1 00:05:52.853 EAL: Detected lcore 39 as core 3 on socket 1 00:05:52.853 EAL: Detected lcore 40 as core 4 on socket 1 00:05:52.853 EAL: Detected lcore 41 as core 5 on socket 1 00:05:52.853 EAL: Detected lcore 42 as core 8 on socket 1 00:05:52.853 EAL: Detected lcore 43 as core 9 
on socket 1 00:05:52.853 EAL: Detected lcore 44 as core 10 on socket 1 00:05:52.853 EAL: Detected lcore 45 as core 11 on socket 1 00:05:52.853 EAL: Detected lcore 46 as core 12 on socket 1 00:05:52.853 EAL: Detected lcore 47 as core 13 on socket 1 00:05:52.853 EAL: Maximum logical cores by configuration: 128 00:05:52.853 EAL: Detected CPU lcores: 48 00:05:52.853 EAL: Detected NUMA nodes: 2 00:05:52.853 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:52.853 EAL: Detected shared linkage of DPDK 00:05:52.853 EAL: No shared files mode enabled, IPC will be disabled 00:05:52.853 EAL: Bus pci wants IOVA as 'DC' 00:05:52.853 EAL: Buses did not request a specific IOVA mode. 00:05:52.853 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:52.853 EAL: Selected IOVA mode 'VA' 00:05:52.853 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.853 EAL: Probing VFIO support... 00:05:52.853 EAL: IOMMU type 1 (Type 1) is supported 00:05:52.853 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:52.853 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:52.853 EAL: VFIO support initialized 00:05:52.853 EAL: Ask a virtual area of 0x2e000 bytes 00:05:52.854 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:52.854 EAL: Setting up physically contiguous memory... 00:05:52.854 EAL: Setting maximum number of open files to 524288 00:05:52.854 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:52.854 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:52.854 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:52.854 EAL: Ask a virtual area of 0x61000 bytes 00:05:52.854 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:52.854 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:52.854 EAL: Ask a virtual area of 0x400000000 bytes 00:05:52.854 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:52.854 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:52.854 EAL: Ask a virtual area of 0x61000 bytes 00:05:52.854 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:52.854 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:52.854 EAL: Ask a virtual area of 0x400000000 bytes 00:05:52.854 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:52.854 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:52.854 EAL: Ask a virtual area of 0x61000 bytes 00:05:52.854 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:52.854 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:52.854 EAL: Ask a virtual area of 0x400000000 bytes 00:05:52.854 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:52.854 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:52.854 EAL: Ask a virtual area of 0x61000 bytes 00:05:52.854 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:52.854 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:52.854 EAL: Ask a virtual area of 0x400000000 bytes 00:05:52.854 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:52.854 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:52.854 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:52.854 EAL: Ask a virtual area of 0x61000 bytes 00:05:52.854 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:52.854 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:52.854 EAL: Ask a virtual 
area of 0x400000000 bytes 00:05:52.854 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:52.854 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:52.854 EAL: Ask a virtual area of 0x61000 bytes 00:05:52.854 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:52.854 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:52.854 EAL: Ask a virtual area of 0x400000000 bytes 00:05:52.854 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:52.854 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:52.854 EAL: Ask a virtual area of 0x61000 bytes 00:05:52.854 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:52.854 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:52.854 EAL: Ask a virtual area of 0x400000000 bytes 00:05:52.854 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:52.854 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:52.854 EAL: Ask a virtual area of 0x61000 bytes 00:05:52.854 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:52.854 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:52.854 EAL: Ask a virtual area of 0x400000000 bytes 00:05:52.854 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:52.854 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:52.854 EAL: Hugepages will be freed exactly as allocated. 00:05:52.854 EAL: No shared files mode enabled, IPC is disabled 00:05:52.854 EAL: No shared files mode enabled, IPC is disabled 00:05:52.854 EAL: TSC frequency is ~2700000 KHz 00:05:52.854 EAL: Main lcore 0 is ready (tid=7f6abd4c9a00;cpuset=[0]) 00:05:52.854 EAL: Trying to obtain current memory policy. 00:05:52.854 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.854 EAL: Restoring previous memory policy: 0 00:05:52.854 EAL: request: mp_malloc_sync 00:05:52.854 EAL: No shared files mode enabled, IPC is disabled 00:05:52.854 EAL: Heap on socket 0 was expanded by 2MB 00:05:52.854 EAL: No shared files mode enabled, IPC is disabled 00:05:52.854 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:52.854 EAL: Mem event callback 'spdk:(nil)' registered 00:05:52.854 00:05:52.854 00:05:52.854 CUnit - A unit testing framework for C - Version 2.1-3 00:05:52.854 http://cunit.sourceforge.net/ 00:05:52.854 00:05:52.854 00:05:52.854 Suite: components_suite 00:05:52.854 Test: vtophys_malloc_test ...passed 00:05:52.854 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:52.854 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.854 EAL: Restoring previous memory policy: 4 00:05:52.854 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.854 EAL: request: mp_malloc_sync 00:05:52.854 EAL: No shared files mode enabled, IPC is disabled 00:05:52.854 EAL: Heap on socket 0 was expanded by 4MB 00:05:52.854 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.854 EAL: request: mp_malloc_sync 00:05:52.854 EAL: No shared files mode enabled, IPC is disabled 00:05:52.854 EAL: Heap on socket 0 was shrunk by 4MB 00:05:52.854 EAL: Trying to obtain current memory policy. 
00:05:52.854 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.854 EAL: Restoring previous memory policy: 4 00:05:52.854 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.854 EAL: request: mp_malloc_sync 00:05:52.854 EAL: No shared files mode enabled, IPC is disabled 00:05:52.854 EAL: Heap on socket 0 was expanded by 6MB 00:05:52.854 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.854 EAL: request: mp_malloc_sync 00:05:52.854 EAL: No shared files mode enabled, IPC is disabled 00:05:52.854 EAL: Heap on socket 0 was shrunk by 6MB 00:05:52.854 EAL: Trying to obtain current memory policy. 00:05:52.854 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.854 EAL: Restoring previous memory policy: 4 00:05:52.854 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.854 EAL: request: mp_malloc_sync 00:05:52.854 EAL: No shared files mode enabled, IPC is disabled 00:05:52.854 EAL: Heap on socket 0 was expanded by 10MB 00:05:52.854 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.854 EAL: request: mp_malloc_sync 00:05:52.854 EAL: No shared files mode enabled, IPC is disabled 00:05:52.854 EAL: Heap on socket 0 was shrunk by 10MB 00:05:52.854 EAL: Trying to obtain current memory policy. 00:05:52.854 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.854 EAL: Restoring previous memory policy: 4 00:05:52.854 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.854 EAL: request: mp_malloc_sync 00:05:52.854 EAL: No shared files mode enabled, IPC is disabled 00:05:52.854 EAL: Heap on socket 0 was expanded by 18MB 00:05:52.854 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.854 EAL: request: mp_malloc_sync 00:05:52.854 EAL: No shared files mode enabled, IPC is disabled 00:05:52.854 EAL: Heap on socket 0 was shrunk by 18MB 00:05:52.854 EAL: Trying to obtain current memory policy. 00:05:52.854 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.854 EAL: Restoring previous memory policy: 4 00:05:52.854 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.854 EAL: request: mp_malloc_sync 00:05:52.854 EAL: No shared files mode enabled, IPC is disabled 00:05:52.854 EAL: Heap on socket 0 was expanded by 34MB 00:05:52.854 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.854 EAL: request: mp_malloc_sync 00:05:52.854 EAL: No shared files mode enabled, IPC is disabled 00:05:52.854 EAL: Heap on socket 0 was shrunk by 34MB 00:05:52.854 EAL: Trying to obtain current memory policy. 00:05:52.854 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.854 EAL: Restoring previous memory policy: 4 00:05:52.854 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.854 EAL: request: mp_malloc_sync 00:05:52.854 EAL: No shared files mode enabled, IPC is disabled 00:05:52.854 EAL: Heap on socket 0 was expanded by 66MB 00:05:52.854 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.854 EAL: request: mp_malloc_sync 00:05:52.854 EAL: No shared files mode enabled, IPC is disabled 00:05:52.854 EAL: Heap on socket 0 was shrunk by 66MB 00:05:52.854 EAL: Trying to obtain current memory policy. 
00:05:52.854 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:53.113 EAL: Restoring previous memory policy: 4 00:05:53.113 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.113 EAL: request: mp_malloc_sync 00:05:53.113 EAL: No shared files mode enabled, IPC is disabled 00:05:53.113 EAL: Heap on socket 0 was expanded by 130MB 00:05:53.113 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.113 EAL: request: mp_malloc_sync 00:05:53.113 EAL: No shared files mode enabled, IPC is disabled 00:05:53.113 EAL: Heap on socket 0 was shrunk by 130MB 00:05:53.113 EAL: Trying to obtain current memory policy. 00:05:53.113 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:53.113 EAL: Restoring previous memory policy: 4 00:05:53.113 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.113 EAL: request: mp_malloc_sync 00:05:53.113 EAL: No shared files mode enabled, IPC is disabled 00:05:53.113 EAL: Heap on socket 0 was expanded by 258MB 00:05:53.113 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.113 EAL: request: mp_malloc_sync 00:05:53.113 EAL: No shared files mode enabled, IPC is disabled 00:05:53.113 EAL: Heap on socket 0 was shrunk by 258MB 00:05:53.113 EAL: Trying to obtain current memory policy. 00:05:53.113 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:53.371 EAL: Restoring previous memory policy: 4 00:05:53.371 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.371 EAL: request: mp_malloc_sync 00:05:53.371 EAL: No shared files mode enabled, IPC is disabled 00:05:53.371 EAL: Heap on socket 0 was expanded by 514MB 00:05:53.371 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.654 EAL: request: mp_malloc_sync 00:05:53.654 EAL: No shared files mode enabled, IPC is disabled 00:05:53.654 EAL: Heap on socket 0 was shrunk by 514MB 00:05:53.654 EAL: Trying to obtain current memory policy. 
00:05:53.654 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:53.912 EAL: Restoring previous memory policy: 4 00:05:53.912 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.912 EAL: request: mp_malloc_sync 00:05:53.912 EAL: No shared files mode enabled, IPC is disabled 00:05:53.912 EAL: Heap on socket 0 was expanded by 1026MB 00:05:54.170 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.429 EAL: request: mp_malloc_sync 00:05:54.429 EAL: No shared files mode enabled, IPC is disabled 00:05:54.429 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:54.429 passed 00:05:54.429 00:05:54.429 Run Summary: Type Total Ran Passed Failed Inactive 00:05:54.429 suites 1 1 n/a 0 0 00:05:54.429 tests 2 2 2 0 0 00:05:54.429 asserts 497 497 497 0 n/a 00:05:54.429 00:05:54.429 Elapsed time = 1.451 seconds 00:05:54.429 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.429 EAL: request: mp_malloc_sync 00:05:54.429 EAL: No shared files mode enabled, IPC is disabled 00:05:54.429 EAL: Heap on socket 0 was shrunk by 2MB 00:05:54.429 EAL: No shared files mode enabled, IPC is disabled 00:05:54.429 EAL: No shared files mode enabled, IPC is disabled 00:05:54.429 EAL: No shared files mode enabled, IPC is disabled 00:05:54.429 00:05:54.429 real 0m1.571s 00:05:54.429 user 0m0.903s 00:05:54.429 sys 0m0.635s 00:05:54.429 11:13:49 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.429 11:13:49 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:54.429 ************************************ 00:05:54.429 END TEST env_vtophys 00:05:54.429 ************************************ 00:05:54.429 11:13:49 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:54.429 11:13:49 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.429 11:13:49 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.429 11:13:49 env -- common/autotest_common.sh@10 -- # set +x 00:05:54.429 ************************************ 00:05:54.429 START TEST env_pci 00:05:54.429 ************************************ 00:05:54.429 11:13:49 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:54.429 00:05:54.429 00:05:54.429 CUnit - A unit testing framework for C - Version 2.1-3 00:05:54.429 http://cunit.sourceforge.net/ 00:05:54.429 00:05:54.429 00:05:54.429 Suite: pci 00:05:54.429 Test: pci_hook ...[2024-07-26 11:13:49.974867] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1990576 has claimed it 00:05:54.429 EAL: Cannot find device (10000:00:01.0) 00:05:54.429 EAL: Failed to attach device on primary process 00:05:54.429 passed 00:05:54.429 00:05:54.429 Run Summary: Type Total Ran Passed Failed Inactive 00:05:54.429 suites 1 1 n/a 0 0 00:05:54.429 tests 1 1 1 0 0 00:05:54.429 asserts 25 25 25 0 n/a 00:05:54.429 00:05:54.429 Elapsed time = 0.026 seconds 00:05:54.429 00:05:54.429 real 0m0.044s 00:05:54.430 user 0m0.012s 00:05:54.430 sys 0m0.032s 00:05:54.430 11:13:50 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.430 11:13:50 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:54.430 ************************************ 00:05:54.430 END TEST env_pci 00:05:54.430 ************************************ 00:05:54.430 11:13:50 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:54.430 
11:13:50 env -- env/env.sh@15 -- # uname 00:05:54.430 11:13:50 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:54.430 11:13:50 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:54.430 11:13:50 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:54.430 11:13:50 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:54.430 11:13:50 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.430 11:13:50 env -- common/autotest_common.sh@10 -- # set +x 00:05:54.430 ************************************ 00:05:54.430 START TEST env_dpdk_post_init 00:05:54.430 ************************************ 00:05:54.430 11:13:50 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:54.430 EAL: Detected CPU lcores: 48 00:05:54.430 EAL: Detected NUMA nodes: 2 00:05:54.430 EAL: Detected shared linkage of DPDK 00:05:54.430 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:54.688 EAL: Selected IOVA mode 'VA' 00:05:54.688 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.688 EAL: VFIO support initialized 00:05:54.688 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:54.688 EAL: Using IOMMU type 1 (Type 1) 00:05:54.688 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:54.688 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:54.688 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:54.688 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:54.688 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:54.688 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:54.688 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:54.688 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:54.688 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:54.688 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:54.688 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:54.688 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:54.688 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:54.688 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:54.947 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:54.947 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:55.514 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:82:00.0 (socket 1) 00:05:58.806 EAL: Releasing PCI mapped resource for 0000:82:00.0 00:05:58.806 EAL: Calling pci_unmap_resource for 0000:82:00.0 at 0x202001040000 00:05:59.065 Starting DPDK initialization... 00:05:59.065 Starting SPDK post initialization... 00:05:59.065 SPDK NVMe probe 00:05:59.065 Attaching to 0000:82:00.0 00:05:59.065 Attached to 0000:82:00.0 00:05:59.065 Cleaning up... 
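The env.sh trace above (tags @14, @15, @22) shows how the DPDK argument string for env_dpdk_post_init is assembled: a fixed single-core mask plus, on Linux only, a pinned base virtual address so memory maps land at a predictable location. A sketch of that assembly, assuming $testdir is the harness-provided path to test/env:

    # Sketch of the argv assembly traced in env.sh above.
    argv='-c 0x1 '                               # single-core mask
    if [ "$(uname)" = Linux ]; then
        argv+=--base-virtaddr=0x200000000000     # stable VA for DPDK mem maps
    fi
    # assumption: $testdir is set by the harness; $argv is intentionally unquoted
    # so it splits into separate arguments, as in the traced invocation.
    "$testdir"/env_dpdk_post_init/env_dpdk_post_init $argv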
00:05:59.065 00:05:59.065 real 0m4.419s 00:05:59.065 user 0m3.268s 00:05:59.065 sys 0m0.209s 00:05:59.065 11:13:54 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.065 11:13:54 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:59.065 ************************************ 00:05:59.065 END TEST env_dpdk_post_init 00:05:59.065 ************************************ 00:05:59.065 11:13:54 env -- env/env.sh@26 -- # uname 00:05:59.065 11:13:54 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:59.065 11:13:54 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:59.065 11:13:54 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.065 11:13:54 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.065 11:13:54 env -- common/autotest_common.sh@10 -- # set +x 00:05:59.065 ************************************ 00:05:59.065 START TEST env_mem_callbacks 00:05:59.065 ************************************ 00:05:59.065 11:13:54 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:59.065 EAL: Detected CPU lcores: 48 00:05:59.065 EAL: Detected NUMA nodes: 2 00:05:59.065 EAL: Detected shared linkage of DPDK 00:05:59.065 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:59.065 EAL: Selected IOVA mode 'VA' 00:05:59.065 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.065 EAL: VFIO support initialized 00:05:59.065 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:59.065 00:05:59.065 00:05:59.065 CUnit - A unit testing framework for C - Version 2.1-3 00:05:59.065 http://cunit.sourceforge.net/ 00:05:59.065 00:05:59.065 00:05:59.065 Suite: memory 00:05:59.065 Test: test ... 
00:05:59.065 register 0x200000200000 2097152 00:05:59.065 malloc 3145728 00:05:59.065 register 0x200000400000 4194304 00:05:59.065 buf 0x200000500000 len 3145728 PASSED 00:05:59.065 malloc 64 00:05:59.065 buf 0x2000004fff40 len 64 PASSED 00:05:59.065 malloc 4194304 00:05:59.065 register 0x200000800000 6291456 00:05:59.065 buf 0x200000a00000 len 4194304 PASSED 00:05:59.065 free 0x200000500000 3145728 00:05:59.065 free 0x2000004fff40 64 00:05:59.065 unregister 0x200000400000 4194304 PASSED 00:05:59.065 free 0x200000a00000 4194304 00:05:59.065 unregister 0x200000800000 6291456 PASSED 00:05:59.065 malloc 8388608 00:05:59.065 register 0x200000400000 10485760 00:05:59.065 buf 0x200000600000 len 8388608 PASSED 00:05:59.065 free 0x200000600000 8388608 00:05:59.065 unregister 0x200000400000 10485760 PASSED 00:05:59.065 passed 00:05:59.065 00:05:59.065 Run Summary: Type Total Ran Passed Failed Inactive 00:05:59.065 suites 1 1 n/a 0 0 00:05:59.065 tests 1 1 1 0 0 00:05:59.065 asserts 15 15 15 0 n/a 00:05:59.065 00:05:59.065 Elapsed time = 0.006 seconds 00:05:59.065 00:05:59.065 real 0m0.092s 00:05:59.065 user 0m0.023s 00:05:59.065 sys 0m0.067s 00:05:59.065 11:13:54 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.065 11:13:54 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:59.065 ************************************ 00:05:59.065 END TEST env_mem_callbacks 00:05:59.065 ************************************ 00:05:59.065 00:05:59.065 real 0m6.735s 00:05:59.065 user 0m4.579s 00:05:59.065 sys 0m1.201s 00:05:59.065 11:13:54 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.065 11:13:54 env -- common/autotest_common.sh@10 -- # set +x 00:05:59.065 ************************************ 00:05:59.065 END TEST env 00:05:59.065 ************************************ 00:05:59.065 11:13:54 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:59.065 11:13:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.065 11:13:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.065 11:13:54 -- common/autotest_common.sh@10 -- # set +x 00:05:59.065 ************************************ 00:05:59.065 START TEST rpc 00:05:59.065 ************************************ 00:05:59.065 11:13:54 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:59.324 * Looking for test storage... 00:05:59.324 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:59.324 11:13:54 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1991227 00:05:59.324 11:13:54 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:59.324 11:13:54 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:59.324 11:13:54 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1991227 00:05:59.324 11:13:54 rpc -- common/autotest_common.sh@831 -- # '[' -z 1991227 ']' 00:05:59.324 11:13:54 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.324 11:13:54 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:59.324 11:13:54 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
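The rpc test start-up traced below follows a fixed contract: launch spdk_tgt in the background, record its pid, install a cleanup trap, and block until the target is listening on the default UNIX socket. A minimal sketch of that pattern; the polling loop only approximates waitforlisten (the real helper lives in autotest_common.sh), and SPDK_BIN_DIR is an assumed stand-in for the harness's build/bin path:

    # Sketch: start spdk_tgt and wait for its RPC socket.
    rpc_addr=/var/tmp/spdk.sock
    "$SPDK_BIN_DIR/spdk_tgt" -e bdev &           # -e bdev enables bdev tracepoints
    spdk_pid=$!
    trap 'kill $spdk_pid; exit 1' SIGINT SIGTERM EXIT
    for _ in $(seq 1 100); do                    # max_retries=100, as in the trace
        "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done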
00:05:59.324 11:13:54 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:59.324 11:13:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.324 [2024-07-26 11:13:54.884800] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:05:59.324 [2024-07-26 11:13:54.884975] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1991227 ] 00:05:59.324 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.324 [2024-07-26 11:13:54.975699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.582 [2024-07-26 11:13:55.097773] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:59.582 [2024-07-26 11:13:55.097843] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1991227' to capture a snapshot of events at runtime. 00:05:59.582 [2024-07-26 11:13:55.097859] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:59.582 [2024-07-26 11:13:55.097873] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:59.582 [2024-07-26 11:13:55.097884] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1991227 for offline analysis/debug. 00:05:59.582 [2024-07-26 11:13:55.097916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.840 11:13:55 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:59.840 11:13:55 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:59.840 11:13:55 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:59.840 11:13:55 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:59.840 11:13:55 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:59.840 11:13:55 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:59.840 11:13:55 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.840 11:13:55 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.840 11:13:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.840 ************************************ 00:05:59.840 START TEST rpc_integrity 00:05:59.840 ************************************ 00:05:59.840 11:13:55 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:59.840 11:13:55 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:59.841 11:13:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.841 11:13:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:59.841 11:13:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.841 11:13:55 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:59.841 11:13:55 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:59.841 11:13:55 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:59.841 11:13:55 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:59.841 11:13:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.841 11:13:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:59.841 11:13:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.841 11:13:55 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:59.841 11:13:55 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:59.841 11:13:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.841 11:13:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:59.841 11:13:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.841 11:13:55 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:59.841 { 00:05:59.841 "name": "Malloc0", 00:05:59.841 "aliases": [ 00:05:59.841 "f53f61b0-68a8-4aab-855d-610962c8fd06" 00:05:59.841 ], 00:05:59.841 "product_name": "Malloc disk", 00:05:59.841 "block_size": 512, 00:05:59.841 "num_blocks": 16384, 00:05:59.841 "uuid": "f53f61b0-68a8-4aab-855d-610962c8fd06", 00:05:59.841 "assigned_rate_limits": { 00:05:59.841 "rw_ios_per_sec": 0, 00:05:59.841 "rw_mbytes_per_sec": 0, 00:05:59.841 "r_mbytes_per_sec": 0, 00:05:59.841 "w_mbytes_per_sec": 0 00:05:59.841 }, 00:05:59.841 "claimed": false, 00:05:59.841 "zoned": false, 00:05:59.841 "supported_io_types": { 00:05:59.841 "read": true, 00:05:59.841 "write": true, 00:05:59.841 "unmap": true, 00:05:59.841 "flush": true, 00:05:59.841 "reset": true, 00:05:59.841 "nvme_admin": false, 00:05:59.841 "nvme_io": false, 00:05:59.841 "nvme_io_md": false, 00:05:59.841 "write_zeroes": true, 00:05:59.841 "zcopy": true, 00:05:59.841 "get_zone_info": false, 00:05:59.841 "zone_management": false, 00:05:59.841 "zone_append": false, 00:05:59.841 "compare": false, 00:05:59.841 "compare_and_write": false, 00:05:59.841 "abort": true, 00:05:59.841 "seek_hole": false, 00:05:59.841 "seek_data": false, 00:05:59.841 "copy": true, 00:05:59.841 "nvme_iov_md": false 00:05:59.841 }, 00:05:59.841 "memory_domains": [ 00:05:59.841 { 00:05:59.841 "dma_device_id": "system", 00:05:59.841 "dma_device_type": 1 00:05:59.841 }, 00:05:59.841 { 00:05:59.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:59.841 "dma_device_type": 2 00:05:59.841 } 00:05:59.841 ], 00:05:59.841 "driver_specific": {} 00:05:59.841 } 00:05:59.841 ]' 00:05:59.841 11:13:55 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:00.099 11:13:55 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:00.099 11:13:55 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:00.099 11:13:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.099 11:13:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.099 [2024-07-26 11:13:55.512977] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:00.099 [2024-07-26 11:13:55.513025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:00.099 [2024-07-26 11:13:55.513049] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20063e0 00:06:00.099 [2024-07-26 11:13:55.513064] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:00.099 [2024-07-26 11:13:55.514526] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
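Note: the claim cycle exercised here can be replayed by hand against a live target. A minimal sketch using the in-tree scripts/rpc.py client (bdev names and sizes mirror this run and are illustrative):

  # create an 8 MB malloc bdev with 512-byte blocks; the RPC prints the new name
  ./scripts/rpc.py bdev_malloc_create 8 512                # -> Malloc0
  # layer a passthru vbdev on top, which claims the base bdev
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  # both bdevs are now listed, with Malloc0 claimed exclusive_write, as in the dump that follows
  ./scripts/rpc.py bdev_get_bdevs | jq length              # expect 2
  # tear down in reverse order so the claim is released first
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc0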
00:06:00.099 [2024-07-26 11:13:55.514554] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:00.099 Passthru0 00:06:00.099 11:13:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.099 11:13:55 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:00.099 11:13:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.099 11:13:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.100 11:13:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.100 11:13:55 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:00.100 { 00:06:00.100 "name": "Malloc0", 00:06:00.100 "aliases": [ 00:06:00.100 "f53f61b0-68a8-4aab-855d-610962c8fd06" 00:06:00.100 ], 00:06:00.100 "product_name": "Malloc disk", 00:06:00.100 "block_size": 512, 00:06:00.100 "num_blocks": 16384, 00:06:00.100 "uuid": "f53f61b0-68a8-4aab-855d-610962c8fd06", 00:06:00.100 "assigned_rate_limits": { 00:06:00.100 "rw_ios_per_sec": 0, 00:06:00.100 "rw_mbytes_per_sec": 0, 00:06:00.100 "r_mbytes_per_sec": 0, 00:06:00.100 "w_mbytes_per_sec": 0 00:06:00.100 }, 00:06:00.100 "claimed": true, 00:06:00.100 "claim_type": "exclusive_write", 00:06:00.100 "zoned": false, 00:06:00.100 "supported_io_types": { 00:06:00.100 "read": true, 00:06:00.100 "write": true, 00:06:00.100 "unmap": true, 00:06:00.100 "flush": true, 00:06:00.100 "reset": true, 00:06:00.100 "nvme_admin": false, 00:06:00.100 "nvme_io": false, 00:06:00.100 "nvme_io_md": false, 00:06:00.100 "write_zeroes": true, 00:06:00.100 "zcopy": true, 00:06:00.100 "get_zone_info": false, 00:06:00.100 "zone_management": false, 00:06:00.100 "zone_append": false, 00:06:00.100 "compare": false, 00:06:00.100 "compare_and_write": false, 00:06:00.100 "abort": true, 00:06:00.100 "seek_hole": false, 00:06:00.100 "seek_data": false, 00:06:00.100 "copy": true, 00:06:00.100 "nvme_iov_md": false 00:06:00.100 }, 00:06:00.100 "memory_domains": [ 00:06:00.100 { 00:06:00.100 "dma_device_id": "system", 00:06:00.100 "dma_device_type": 1 00:06:00.100 }, 00:06:00.100 { 00:06:00.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:00.100 "dma_device_type": 2 00:06:00.100 } 00:06:00.100 ], 00:06:00.100 "driver_specific": {} 00:06:00.100 }, 00:06:00.100 { 00:06:00.100 "name": "Passthru0", 00:06:00.100 "aliases": [ 00:06:00.100 "08eeec19-b9c9-59f3-9889-a3df8fe8c8ab" 00:06:00.100 ], 00:06:00.100 "product_name": "passthru", 00:06:00.100 "block_size": 512, 00:06:00.100 "num_blocks": 16384, 00:06:00.100 "uuid": "08eeec19-b9c9-59f3-9889-a3df8fe8c8ab", 00:06:00.100 "assigned_rate_limits": { 00:06:00.100 "rw_ios_per_sec": 0, 00:06:00.100 "rw_mbytes_per_sec": 0, 00:06:00.100 "r_mbytes_per_sec": 0, 00:06:00.100 "w_mbytes_per_sec": 0 00:06:00.100 }, 00:06:00.100 "claimed": false, 00:06:00.100 "zoned": false, 00:06:00.100 "supported_io_types": { 00:06:00.100 "read": true, 00:06:00.100 "write": true, 00:06:00.100 "unmap": true, 00:06:00.100 "flush": true, 00:06:00.100 "reset": true, 00:06:00.100 "nvme_admin": false, 00:06:00.100 "nvme_io": false, 00:06:00.100 "nvme_io_md": false, 00:06:00.100 "write_zeroes": true, 00:06:00.100 "zcopy": true, 00:06:00.100 "get_zone_info": false, 00:06:00.100 "zone_management": false, 00:06:00.100 "zone_append": false, 00:06:00.100 "compare": false, 00:06:00.100 "compare_and_write": false, 00:06:00.100 "abort": true, 00:06:00.100 "seek_hole": false, 00:06:00.100 "seek_data": false, 00:06:00.100 "copy": true, 00:06:00.100 "nvme_iov_md": false 00:06:00.100 
}, 00:06:00.100 "memory_domains": [ 00:06:00.100 { 00:06:00.100 "dma_device_id": "system", 00:06:00.100 "dma_device_type": 1 00:06:00.100 }, 00:06:00.100 { 00:06:00.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:00.100 "dma_device_type": 2 00:06:00.100 } 00:06:00.100 ], 00:06:00.100 "driver_specific": { 00:06:00.100 "passthru": { 00:06:00.100 "name": "Passthru0", 00:06:00.100 "base_bdev_name": "Malloc0" 00:06:00.100 } 00:06:00.100 } 00:06:00.100 } 00:06:00.100 ]' 00:06:00.100 11:13:55 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:00.100 11:13:55 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:00.100 11:13:55 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:00.100 11:13:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.100 11:13:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.100 11:13:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.100 11:13:55 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:00.100 11:13:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.100 11:13:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.100 11:13:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.100 11:13:55 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:00.100 11:13:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.100 11:13:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.100 11:13:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.100 11:13:55 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:00.100 11:13:55 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:00.100 11:13:55 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:00.100 00:06:00.100 real 0m0.240s 00:06:00.100 user 0m0.162s 00:06:00.100 sys 0m0.021s 00:06:00.100 11:13:55 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.100 11:13:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.100 ************************************ 00:06:00.100 END TEST rpc_integrity 00:06:00.100 ************************************ 00:06:00.100 11:13:55 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:00.100 11:13:55 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.100 11:13:55 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.100 11:13:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.100 ************************************ 00:06:00.100 START TEST rpc_plugins 00:06:00.100 ************************************ 00:06:00.100 11:13:55 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:00.100 11:13:55 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:00.100 11:13:55 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.100 11:13:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:00.100 11:13:55 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.100 11:13:55 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:00.100 11:13:55 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:00.100 11:13:55 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.100 11:13:55 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:06:00.100 11:13:55 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.100 11:13:55 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:00.100 { 00:06:00.100 "name": "Malloc1", 00:06:00.100 "aliases": [ 00:06:00.100 "6511680a-9d51-45c9-8d12-1c24115db344" 00:06:00.100 ], 00:06:00.100 "product_name": "Malloc disk", 00:06:00.100 "block_size": 4096, 00:06:00.100 "num_blocks": 256, 00:06:00.100 "uuid": "6511680a-9d51-45c9-8d12-1c24115db344", 00:06:00.100 "assigned_rate_limits": { 00:06:00.100 "rw_ios_per_sec": 0, 00:06:00.100 "rw_mbytes_per_sec": 0, 00:06:00.100 "r_mbytes_per_sec": 0, 00:06:00.100 "w_mbytes_per_sec": 0 00:06:00.100 }, 00:06:00.100 "claimed": false, 00:06:00.100 "zoned": false, 00:06:00.100 "supported_io_types": { 00:06:00.100 "read": true, 00:06:00.100 "write": true, 00:06:00.100 "unmap": true, 00:06:00.100 "flush": true, 00:06:00.100 "reset": true, 00:06:00.100 "nvme_admin": false, 00:06:00.100 "nvme_io": false, 00:06:00.100 "nvme_io_md": false, 00:06:00.100 "write_zeroes": true, 00:06:00.100 "zcopy": true, 00:06:00.100 "get_zone_info": false, 00:06:00.100 "zone_management": false, 00:06:00.100 "zone_append": false, 00:06:00.100 "compare": false, 00:06:00.100 "compare_and_write": false, 00:06:00.100 "abort": true, 00:06:00.100 "seek_hole": false, 00:06:00.100 "seek_data": false, 00:06:00.100 "copy": true, 00:06:00.100 "nvme_iov_md": false 00:06:00.100 }, 00:06:00.100 "memory_domains": [ 00:06:00.100 { 00:06:00.100 "dma_device_id": "system", 00:06:00.100 "dma_device_type": 1 00:06:00.100 }, 00:06:00.100 { 00:06:00.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:00.100 "dma_device_type": 2 00:06:00.100 } 00:06:00.100 ], 00:06:00.100 "driver_specific": {} 00:06:00.100 } 00:06:00.100 ]' 00:06:00.100 11:13:55 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:00.100 11:13:55 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:00.100 11:13:55 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:00.100 11:13:55 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.100 11:13:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:00.359 11:13:55 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.359 11:13:55 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:00.359 11:13:55 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.359 11:13:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:00.359 11:13:55 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.359 11:13:55 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:00.359 11:13:55 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:00.359 11:13:55 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:00.359 00:06:00.359 real 0m0.119s 00:06:00.359 user 0m0.074s 00:06:00.359 sys 0m0.016s 00:06:00.359 11:13:55 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.359 11:13:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:00.359 ************************************ 00:06:00.359 END TEST rpc_plugins 00:06:00.359 ************************************ 00:06:00.359 11:13:55 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:00.359 11:13:55 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.359 11:13:55 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.359 11:13:55 
rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.359 ************************************ 00:06:00.359 START TEST rpc_trace_cmd_test 00:06:00.359 ************************************ 00:06:00.359 11:13:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:00.359 11:13:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:00.359 11:13:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:00.359 11:13:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.359 11:13:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:00.359 11:13:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.359 11:13:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:00.359 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1991227", 00:06:00.359 "tpoint_group_mask": "0x8", 00:06:00.359 "iscsi_conn": { 00:06:00.359 "mask": "0x2", 00:06:00.359 "tpoint_mask": "0x0" 00:06:00.360 }, 00:06:00.360 "scsi": { 00:06:00.360 "mask": "0x4", 00:06:00.360 "tpoint_mask": "0x0" 00:06:00.360 }, 00:06:00.360 "bdev": { 00:06:00.360 "mask": "0x8", 00:06:00.360 "tpoint_mask": "0xffffffffffffffff" 00:06:00.360 }, 00:06:00.360 "nvmf_rdma": { 00:06:00.360 "mask": "0x10", 00:06:00.360 "tpoint_mask": "0x0" 00:06:00.360 }, 00:06:00.360 "nvmf_tcp": { 00:06:00.360 "mask": "0x20", 00:06:00.360 "tpoint_mask": "0x0" 00:06:00.360 }, 00:06:00.360 "ftl": { 00:06:00.360 "mask": "0x40", 00:06:00.360 "tpoint_mask": "0x0" 00:06:00.360 }, 00:06:00.360 "blobfs": { 00:06:00.360 "mask": "0x80", 00:06:00.360 "tpoint_mask": "0x0" 00:06:00.360 }, 00:06:00.360 "dsa": { 00:06:00.360 "mask": "0x200", 00:06:00.360 "tpoint_mask": "0x0" 00:06:00.360 }, 00:06:00.360 "thread": { 00:06:00.360 "mask": "0x400", 00:06:00.360 "tpoint_mask": "0x0" 00:06:00.360 }, 00:06:00.360 "nvme_pcie": { 00:06:00.360 "mask": "0x800", 00:06:00.360 "tpoint_mask": "0x0" 00:06:00.360 }, 00:06:00.360 "iaa": { 00:06:00.360 "mask": "0x1000", 00:06:00.360 "tpoint_mask": "0x0" 00:06:00.360 }, 00:06:00.360 "nvme_tcp": { 00:06:00.360 "mask": "0x2000", 00:06:00.360 "tpoint_mask": "0x0" 00:06:00.360 }, 00:06:00.360 "bdev_nvme": { 00:06:00.360 "mask": "0x4000", 00:06:00.360 "tpoint_mask": "0x0" 00:06:00.360 }, 00:06:00.360 "sock": { 00:06:00.360 "mask": "0x8000", 00:06:00.360 "tpoint_mask": "0x0" 00:06:00.360 } 00:06:00.360 }' 00:06:00.360 11:13:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:00.360 11:13:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:00.360 11:13:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:00.360 11:13:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:00.360 11:13:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:00.360 11:13:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:00.360 11:13:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:00.618 11:13:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:00.618 11:13:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:00.618 11:13:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:00.618 00:06:00.618 real 0m0.245s 00:06:00.618 user 0m0.216s 00:06:00.618 sys 0m0.020s 00:06:00.618 11:13:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.618 11:13:56 
rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:00.618 ************************************ 00:06:00.618 END TEST rpc_trace_cmd_test 00:06:00.618 ************************************ 00:06:00.618 11:13:56 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:00.618 11:13:56 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:00.618 11:13:56 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:00.618 11:13:56 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.618 11:13:56 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.618 11:13:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.618 ************************************ 00:06:00.618 START TEST rpc_daemon_integrity 00:06:00.618 ************************************ 00:06:00.618 11:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:00.618 11:13:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:00.618 11:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.618 11:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.618 11:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.618 11:13:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:00.618 11:13:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:00.618 11:13:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:00.618 11:13:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:00.618 11:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.618 11:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.618 11:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.618 11:13:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:00.618 11:13:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:00.618 11:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.618 11:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.618 11:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.618 11:13:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:00.618 { 00:06:00.618 "name": "Malloc2", 00:06:00.618 "aliases": [ 00:06:00.618 "e72564ae-f473-4188-bcd4-406b9efc7127" 00:06:00.618 ], 00:06:00.618 "product_name": "Malloc disk", 00:06:00.618 "block_size": 512, 00:06:00.618 "num_blocks": 16384, 00:06:00.618 "uuid": "e72564ae-f473-4188-bcd4-406b9efc7127", 00:06:00.618 "assigned_rate_limits": { 00:06:00.618 "rw_ios_per_sec": 0, 00:06:00.618 "rw_mbytes_per_sec": 0, 00:06:00.618 "r_mbytes_per_sec": 0, 00:06:00.618 "w_mbytes_per_sec": 0 00:06:00.618 }, 00:06:00.619 "claimed": false, 00:06:00.619 "zoned": false, 00:06:00.619 "supported_io_types": { 00:06:00.619 "read": true, 00:06:00.619 "write": true, 00:06:00.619 "unmap": true, 00:06:00.619 "flush": true, 00:06:00.619 "reset": true, 00:06:00.619 "nvme_admin": false, 00:06:00.619 "nvme_io": false, 00:06:00.619 "nvme_io_md": false, 00:06:00.619 "write_zeroes": true, 00:06:00.619 "zcopy": true, 00:06:00.619 "get_zone_info": false, 00:06:00.619 "zone_management": false, 00:06:00.619 "zone_append": false, 00:06:00.619 "compare": false, 00:06:00.619 "compare_and_write": false, 
00:06:00.619 "abort": true, 00:06:00.619 "seek_hole": false, 00:06:00.619 "seek_data": false, 00:06:00.619 "copy": true, 00:06:00.619 "nvme_iov_md": false 00:06:00.619 }, 00:06:00.619 "memory_domains": [ 00:06:00.619 { 00:06:00.619 "dma_device_id": "system", 00:06:00.619 "dma_device_type": 1 00:06:00.619 }, 00:06:00.619 { 00:06:00.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:00.619 "dma_device_type": 2 00:06:00.619 } 00:06:00.619 ], 00:06:00.619 "driver_specific": {} 00:06:00.619 } 00:06:00.619 ]' 00:06:00.619 11:13:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:00.619 11:13:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:00.619 11:13:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:00.619 11:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.619 11:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.877 [2024-07-26 11:13:56.283140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:00.877 [2024-07-26 11:13:56.283186] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:00.877 [2024-07-26 11:13:56.283218] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20a42f0 00:06:00.877 [2024-07-26 11:13:56.283236] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:00.877 [2024-07-26 11:13:56.284585] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:00.877 [2024-07-26 11:13:56.284614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:00.877 Passthru0 00:06:00.877 11:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.877 11:13:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:00.877 11:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.877 11:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.877 11:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.877 11:13:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:00.877 { 00:06:00.877 "name": "Malloc2", 00:06:00.877 "aliases": [ 00:06:00.877 "e72564ae-f473-4188-bcd4-406b9efc7127" 00:06:00.877 ], 00:06:00.877 "product_name": "Malloc disk", 00:06:00.877 "block_size": 512, 00:06:00.877 "num_blocks": 16384, 00:06:00.877 "uuid": "e72564ae-f473-4188-bcd4-406b9efc7127", 00:06:00.877 "assigned_rate_limits": { 00:06:00.877 "rw_ios_per_sec": 0, 00:06:00.877 "rw_mbytes_per_sec": 0, 00:06:00.877 "r_mbytes_per_sec": 0, 00:06:00.877 "w_mbytes_per_sec": 0 00:06:00.877 }, 00:06:00.877 "claimed": true, 00:06:00.877 "claim_type": "exclusive_write", 00:06:00.877 "zoned": false, 00:06:00.877 "supported_io_types": { 00:06:00.877 "read": true, 00:06:00.877 "write": true, 00:06:00.877 "unmap": true, 00:06:00.877 "flush": true, 00:06:00.877 "reset": true, 00:06:00.877 "nvme_admin": false, 00:06:00.877 "nvme_io": false, 00:06:00.877 "nvme_io_md": false, 00:06:00.877 "write_zeroes": true, 00:06:00.877 "zcopy": true, 00:06:00.877 "get_zone_info": false, 00:06:00.877 "zone_management": false, 00:06:00.877 "zone_append": false, 00:06:00.877 "compare": false, 00:06:00.877 "compare_and_write": false, 00:06:00.877 "abort": true, 00:06:00.877 "seek_hole": false, 00:06:00.877 "seek_data": false, 00:06:00.877 "copy": true, 
00:06:00.877 "nvme_iov_md": false 00:06:00.877 }, 00:06:00.877 "memory_domains": [ 00:06:00.877 { 00:06:00.877 "dma_device_id": "system", 00:06:00.877 "dma_device_type": 1 00:06:00.877 }, 00:06:00.877 { 00:06:00.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:00.877 "dma_device_type": 2 00:06:00.877 } 00:06:00.877 ], 00:06:00.877 "driver_specific": {} 00:06:00.877 }, 00:06:00.877 { 00:06:00.877 "name": "Passthru0", 00:06:00.877 "aliases": [ 00:06:00.877 "79df106e-66e0-5c14-a977-19ca14b668fa" 00:06:00.877 ], 00:06:00.877 "product_name": "passthru", 00:06:00.877 "block_size": 512, 00:06:00.877 "num_blocks": 16384, 00:06:00.877 "uuid": "79df106e-66e0-5c14-a977-19ca14b668fa", 00:06:00.877 "assigned_rate_limits": { 00:06:00.877 "rw_ios_per_sec": 0, 00:06:00.877 "rw_mbytes_per_sec": 0, 00:06:00.877 "r_mbytes_per_sec": 0, 00:06:00.877 "w_mbytes_per_sec": 0 00:06:00.877 }, 00:06:00.877 "claimed": false, 00:06:00.877 "zoned": false, 00:06:00.877 "supported_io_types": { 00:06:00.877 "read": true, 00:06:00.877 "write": true, 00:06:00.877 "unmap": true, 00:06:00.877 "flush": true, 00:06:00.877 "reset": true, 00:06:00.877 "nvme_admin": false, 00:06:00.877 "nvme_io": false, 00:06:00.877 "nvme_io_md": false, 00:06:00.877 "write_zeroes": true, 00:06:00.877 "zcopy": true, 00:06:00.877 "get_zone_info": false, 00:06:00.877 "zone_management": false, 00:06:00.877 "zone_append": false, 00:06:00.877 "compare": false, 00:06:00.877 "compare_and_write": false, 00:06:00.877 "abort": true, 00:06:00.877 "seek_hole": false, 00:06:00.877 "seek_data": false, 00:06:00.877 "copy": true, 00:06:00.877 "nvme_iov_md": false 00:06:00.877 }, 00:06:00.877 "memory_domains": [ 00:06:00.877 { 00:06:00.877 "dma_device_id": "system", 00:06:00.878 "dma_device_type": 1 00:06:00.878 }, 00:06:00.878 { 00:06:00.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:00.878 "dma_device_type": 2 00:06:00.878 } 00:06:00.878 ], 00:06:00.878 "driver_specific": { 00:06:00.878 "passthru": { 00:06:00.878 "name": "Passthru0", 00:06:00.878 "base_bdev_name": "Malloc2" 00:06:00.878 } 00:06:00.878 } 00:06:00.878 } 00:06:00.878 ]' 00:06:00.878 11:13:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:00.878 11:13:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:00.878 11:13:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:00.878 11:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.878 11:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.878 11:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.878 11:13:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:00.878 11:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.878 11:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.878 11:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.878 11:13:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:00.878 11:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.878 11:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.878 11:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.878 11:13:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:00.878 11:13:56 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:00.878 11:13:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:00.878 00:06:00.878 real 0m0.235s 00:06:00.878 user 0m0.161s 00:06:00.878 sys 0m0.019s 00:06:00.878 11:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.878 11:13:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.878 ************************************ 00:06:00.878 END TEST rpc_daemon_integrity 00:06:00.878 ************************************ 00:06:00.878 11:13:56 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:00.878 11:13:56 rpc -- rpc/rpc.sh@84 -- # killprocess 1991227 00:06:00.878 11:13:56 rpc -- common/autotest_common.sh@950 -- # '[' -z 1991227 ']' 00:06:00.878 11:13:56 rpc -- common/autotest_common.sh@954 -- # kill -0 1991227 00:06:00.878 11:13:56 rpc -- common/autotest_common.sh@955 -- # uname 00:06:00.878 11:13:56 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:00.878 11:13:56 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1991227 00:06:00.878 11:13:56 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:00.878 11:13:56 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:00.878 11:13:56 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1991227' 00:06:00.878 killing process with pid 1991227 00:06:00.878 11:13:56 rpc -- common/autotest_common.sh@969 -- # kill 1991227 00:06:00.878 11:13:56 rpc -- common/autotest_common.sh@974 -- # wait 1991227 00:06:01.445 00:06:01.445 real 0m2.213s 00:06:01.445 user 0m2.813s 00:06:01.445 sys 0m0.685s 00:06:01.445 11:13:56 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.445 11:13:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.445 ************************************ 00:06:01.445 END TEST rpc 00:06:01.445 ************************************ 00:06:01.445 11:13:56 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:01.445 11:13:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.445 11:13:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.445 11:13:56 -- common/autotest_common.sh@10 -- # set +x 00:06:01.445 ************************************ 00:06:01.445 START TEST skip_rpc 00:06:01.445 ************************************ 00:06:01.445 11:13:56 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:01.445 * Looking for test storage... 
00:06:01.445 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:01.445 11:13:57 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:01.445 11:13:57 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:01.445 11:13:57 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:01.445 11:13:57 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.445 11:13:57 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.445 11:13:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.704 ************************************ 00:06:01.704 START TEST skip_rpc 00:06:01.704 ************************************ 00:06:01.704 11:13:57 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:01.704 11:13:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1991672 00:06:01.704 11:13:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:01.704 11:13:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:01.704 11:13:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:01.704 [2024-07-26 11:13:57.165564] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:06:01.704 [2024-07-26 11:13:57.165652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1991672 ] 00:06:01.704 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.704 [2024-07-26 11:13:57.232900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.704 [2024-07-26 11:13:57.357169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.966 11:14:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:06.966 11:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:06.966 11:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:06.966 11:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:06.966 11:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.966 11:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:06.966 11:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.966 11:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:06.966 11:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.966 11:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.966 11:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:06.966 11:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:06.966 11:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:06.966 11:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:06.966 11:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:06.966 11:14:02 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:06.966 11:14:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1991672 00:06:06.966 11:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 1991672 ']' 00:06:06.966 11:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 1991672 00:06:06.966 11:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:06.966 11:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:06.966 11:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1991672 00:06:06.967 11:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:06.967 11:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:06.967 11:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1991672' 00:06:06.967 killing process with pid 1991672 00:06:06.967 11:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 1991672 00:06:06.967 11:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 1991672 00:06:06.967 00:06:06.967 real 0m5.513s 00:06:06.967 user 0m5.176s 00:06:06.967 sys 0m0.344s 00:06:06.967 11:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.967 11:14:02 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.967 ************************************ 00:06:06.967 END TEST skip_rpc 00:06:06.967 ************************************ 00:06:07.225 11:14:02 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:07.225 11:14:02 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.225 11:14:02 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.225 11:14:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.225 ************************************ 00:06:07.225 START TEST skip_rpc_with_json 00:06:07.225 ************************************ 00:06:07.225 11:14:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:07.225 11:14:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:07.225 11:14:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1992358 00:06:07.225 11:14:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.225 11:14:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.225 11:14:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1992358 00:06:07.225 11:14:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 1992358 ']' 00:06:07.225 11:14:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.225 11:14:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.225 11:14:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
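Note: the JSON blob dumped below is the output of save_config; skip_rpc_with_json then restarts a target from that file and greps its log for the transport init line. The round trip, in sketch form (file path illustrative):

  # create the tcp transport first so it is captured in the saved state
  ./scripts/rpc.py nvmf_create_transport -t tcp
  # dump the running target's full configuration as JSON
  ./scripts/rpc.py save_config > config.json
  # a fresh target replayed from the file needs no further RPCs
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json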
00:06:07.225 11:14:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.225 11:14:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:07.225 [2024-07-26 11:14:02.741034] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:06:07.225 [2024-07-26 11:14:02.741124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1992358 ] 00:06:07.225 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.225 [2024-07-26 11:14:02.809742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.484 [2024-07-26 11:14:02.935242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.742 11:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:07.742 11:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:07.742 11:14:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:07.742 11:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.742 11:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:07.742 [2024-07-26 11:14:03.217683] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:07.742 request: 00:06:07.742 { 00:06:07.742 "trtype": "tcp", 00:06:07.742 "method": "nvmf_get_transports", 00:06:07.742 "req_id": 1 00:06:07.742 } 00:06:07.742 Got JSON-RPC error response 00:06:07.742 response: 00:06:07.742 { 00:06:07.742 "code": -19, 00:06:07.742 "message": "No such device" 00:06:07.742 } 00:06:07.742 11:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:07.742 11:14:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:07.742 11:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.742 11:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:07.742 [2024-07-26 11:14:03.225807] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:07.742 11:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.743 11:14:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:07.743 11:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.743 11:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:07.743 11:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.743 11:14:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:07.743 { 00:06:07.743 "subsystems": [ 00:06:07.743 { 00:06:07.743 "subsystem": "vfio_user_target", 00:06:07.743 "config": null 00:06:07.743 }, 00:06:07.743 { 00:06:07.743 "subsystem": "keyring", 00:06:07.743 "config": [] 00:06:07.743 }, 00:06:07.743 { 00:06:07.743 "subsystem": "iobuf", 00:06:07.743 "config": [ 00:06:07.743 { 00:06:07.743 "method": "iobuf_set_options", 00:06:07.743 "params": { 00:06:07.743 "small_pool_count": 8192, 00:06:07.743 "large_pool_count": 1024, 00:06:07.743 "small_bufsize": 8192, 00:06:07.743 "large_bufsize": 
135168 00:06:07.743 } 00:06:07.743 } 00:06:07.743 ] 00:06:07.743 }, 00:06:07.743 { 00:06:07.743 "subsystem": "sock", 00:06:07.743 "config": [ 00:06:07.743 { 00:06:07.743 "method": "sock_set_default_impl", 00:06:07.743 "params": { 00:06:07.743 "impl_name": "posix" 00:06:07.743 } 00:06:07.743 }, 00:06:07.743 { 00:06:07.743 "method": "sock_impl_set_options", 00:06:07.743 "params": { 00:06:07.743 "impl_name": "ssl", 00:06:07.743 "recv_buf_size": 4096, 00:06:07.743 "send_buf_size": 4096, 00:06:07.743 "enable_recv_pipe": true, 00:06:07.743 "enable_quickack": false, 00:06:07.743 "enable_placement_id": 0, 00:06:07.743 "enable_zerocopy_send_server": true, 00:06:07.743 "enable_zerocopy_send_client": false, 00:06:07.743 "zerocopy_threshold": 0, 00:06:07.743 "tls_version": 0, 00:06:07.743 "enable_ktls": false 00:06:07.743 } 00:06:07.743 }, 00:06:07.743 { 00:06:07.743 "method": "sock_impl_set_options", 00:06:07.743 "params": { 00:06:07.743 "impl_name": "posix", 00:06:07.743 "recv_buf_size": 2097152, 00:06:07.743 "send_buf_size": 2097152, 00:06:07.743 "enable_recv_pipe": true, 00:06:07.743 "enable_quickack": false, 00:06:07.743 "enable_placement_id": 0, 00:06:07.743 "enable_zerocopy_send_server": true, 00:06:07.743 "enable_zerocopy_send_client": false, 00:06:07.743 "zerocopy_threshold": 0, 00:06:07.743 "tls_version": 0, 00:06:07.743 "enable_ktls": false 00:06:07.743 } 00:06:07.743 } 00:06:07.743 ] 00:06:07.743 }, 00:06:07.743 { 00:06:07.743 "subsystem": "vmd", 00:06:07.743 "config": [] 00:06:07.743 }, 00:06:07.743 { 00:06:07.743 "subsystem": "accel", 00:06:07.743 "config": [ 00:06:07.743 { 00:06:07.743 "method": "accel_set_options", 00:06:07.743 "params": { 00:06:07.743 "small_cache_size": 128, 00:06:07.743 "large_cache_size": 16, 00:06:07.743 "task_count": 2048, 00:06:07.743 "sequence_count": 2048, 00:06:07.743 "buf_count": 2048 00:06:07.743 } 00:06:07.743 } 00:06:07.743 ] 00:06:07.743 }, 00:06:07.743 { 00:06:07.743 "subsystem": "bdev", 00:06:07.743 "config": [ 00:06:07.743 { 00:06:07.743 "method": "bdev_set_options", 00:06:07.743 "params": { 00:06:07.743 "bdev_io_pool_size": 65535, 00:06:07.743 "bdev_io_cache_size": 256, 00:06:07.743 "bdev_auto_examine": true, 00:06:07.743 "iobuf_small_cache_size": 128, 00:06:07.743 "iobuf_large_cache_size": 16 00:06:07.743 } 00:06:07.743 }, 00:06:07.743 { 00:06:07.743 "method": "bdev_raid_set_options", 00:06:07.743 "params": { 00:06:07.743 "process_window_size_kb": 1024, 00:06:07.743 "process_max_bandwidth_mb_sec": 0 00:06:07.743 } 00:06:07.743 }, 00:06:07.743 { 00:06:07.743 "method": "bdev_iscsi_set_options", 00:06:07.743 "params": { 00:06:07.743 "timeout_sec": 30 00:06:07.743 } 00:06:07.743 }, 00:06:07.743 { 00:06:07.743 "method": "bdev_nvme_set_options", 00:06:07.743 "params": { 00:06:07.743 "action_on_timeout": "none", 00:06:07.743 "timeout_us": 0, 00:06:07.743 "timeout_admin_us": 0, 00:06:07.743 "keep_alive_timeout_ms": 10000, 00:06:07.743 "arbitration_burst": 0, 00:06:07.743 "low_priority_weight": 0, 00:06:07.743 "medium_priority_weight": 0, 00:06:07.743 "high_priority_weight": 0, 00:06:07.743 "nvme_adminq_poll_period_us": 10000, 00:06:07.743 "nvme_ioq_poll_period_us": 0, 00:06:07.743 "io_queue_requests": 0, 00:06:07.743 "delay_cmd_submit": true, 00:06:07.743 "transport_retry_count": 4, 00:06:07.743 "bdev_retry_count": 3, 00:06:07.743 "transport_ack_timeout": 0, 00:06:07.743 "ctrlr_loss_timeout_sec": 0, 00:06:07.743 "reconnect_delay_sec": 0, 00:06:07.743 "fast_io_fail_timeout_sec": 0, 00:06:07.743 "disable_auto_failback": false, 00:06:07.743 "generate_uuids": 
false, 00:06:07.743 "transport_tos": 0, 00:06:07.743 "nvme_error_stat": false, 00:06:07.743 "rdma_srq_size": 0, 00:06:07.743 "io_path_stat": false, 00:06:07.743 "allow_accel_sequence": false, 00:06:07.743 "rdma_max_cq_size": 0, 00:06:07.743 "rdma_cm_event_timeout_ms": 0, 00:06:07.743 "dhchap_digests": [ 00:06:07.743 "sha256", 00:06:07.743 "sha384", 00:06:07.743 "sha512" 00:06:07.743 ], 00:06:07.743 "dhchap_dhgroups": [ 00:06:07.743 "null", 00:06:07.743 "ffdhe2048", 00:06:07.743 "ffdhe3072", 00:06:07.743 "ffdhe4096", 00:06:07.743 "ffdhe6144", 00:06:07.743 "ffdhe8192" 00:06:07.743 ] 00:06:07.743 } 00:06:07.743 }, 00:06:07.743 { 00:06:07.743 "method": "bdev_nvme_set_hotplug", 00:06:07.743 "params": { 00:06:07.743 "period_us": 100000, 00:06:07.743 "enable": false 00:06:07.743 } 00:06:07.743 }, 00:06:07.743 { 00:06:07.743 "method": "bdev_wait_for_examine" 00:06:07.743 } 00:06:07.743 ] 00:06:07.743 }, 00:06:07.743 { 00:06:07.743 "subsystem": "scsi", 00:06:07.743 "config": null 00:06:07.743 }, 00:06:07.743 { 00:06:07.743 "subsystem": "scheduler", 00:06:07.743 "config": [ 00:06:07.743 { 00:06:07.743 "method": "framework_set_scheduler", 00:06:07.743 "params": { 00:06:07.743 "name": "static" 00:06:07.743 } 00:06:07.743 } 00:06:07.743 ] 00:06:07.743 }, 00:06:07.743 { 00:06:07.743 "subsystem": "vhost_scsi", 00:06:07.743 "config": [] 00:06:07.743 }, 00:06:07.743 { 00:06:07.743 "subsystem": "vhost_blk", 00:06:07.743 "config": [] 00:06:07.743 }, 00:06:07.743 { 00:06:07.743 "subsystem": "ublk", 00:06:07.743 "config": [] 00:06:07.743 }, 00:06:07.743 { 00:06:07.743 "subsystem": "nbd", 00:06:07.743 "config": [] 00:06:07.743 }, 00:06:07.743 { 00:06:07.743 "subsystem": "nvmf", 00:06:07.743 "config": [ 00:06:07.743 { 00:06:07.743 "method": "nvmf_set_config", 00:06:07.743 "params": { 00:06:07.743 "discovery_filter": "match_any", 00:06:07.743 "admin_cmd_passthru": { 00:06:07.743 "identify_ctrlr": false 00:06:07.743 } 00:06:07.743 } 00:06:07.743 }, 00:06:07.743 { 00:06:07.743 "method": "nvmf_set_max_subsystems", 00:06:07.743 "params": { 00:06:07.743 "max_subsystems": 1024 00:06:07.743 } 00:06:07.743 }, 00:06:07.743 { 00:06:07.743 "method": "nvmf_set_crdt", 00:06:07.743 "params": { 00:06:07.743 "crdt1": 0, 00:06:07.743 "crdt2": 0, 00:06:07.743 "crdt3": 0 00:06:07.743 } 00:06:07.743 }, 00:06:07.743 { 00:06:07.743 "method": "nvmf_create_transport", 00:06:07.743 "params": { 00:06:07.743 "trtype": "TCP", 00:06:07.743 "max_queue_depth": 128, 00:06:07.743 "max_io_qpairs_per_ctrlr": 127, 00:06:07.743 "in_capsule_data_size": 4096, 00:06:07.743 "max_io_size": 131072, 00:06:07.743 "io_unit_size": 131072, 00:06:07.743 "max_aq_depth": 128, 00:06:07.743 "num_shared_buffers": 511, 00:06:07.743 "buf_cache_size": 4294967295, 00:06:07.743 "dif_insert_or_strip": false, 00:06:07.743 "zcopy": false, 00:06:07.743 "c2h_success": true, 00:06:07.743 "sock_priority": 0, 00:06:07.743 "abort_timeout_sec": 1, 00:06:07.743 "ack_timeout": 0, 00:06:07.743 "data_wr_pool_size": 0 00:06:07.743 } 00:06:07.743 } 00:06:07.743 ] 00:06:07.743 }, 00:06:07.743 { 00:06:07.743 "subsystem": "iscsi", 00:06:07.743 "config": [ 00:06:07.743 { 00:06:07.743 "method": "iscsi_set_options", 00:06:07.743 "params": { 00:06:07.743 "node_base": "iqn.2016-06.io.spdk", 00:06:07.743 "max_sessions": 128, 00:06:07.743 "max_connections_per_session": 2, 00:06:07.743 "max_queue_depth": 64, 00:06:07.743 "default_time2wait": 2, 00:06:07.743 "default_time2retain": 20, 00:06:07.744 "first_burst_length": 8192, 00:06:07.744 "immediate_data": true, 00:06:07.744 "allow_duplicated_isid": 
false, 00:06:07.744 "error_recovery_level": 0, 00:06:07.744 "nop_timeout": 60, 00:06:07.744 "nop_in_interval": 30, 00:06:07.744 "disable_chap": false, 00:06:07.744 "require_chap": false, 00:06:07.744 "mutual_chap": false, 00:06:07.744 "chap_group": 0, 00:06:07.744 "max_large_datain_per_connection": 64, 00:06:07.744 "max_r2t_per_connection": 4, 00:06:07.744 "pdu_pool_size": 36864, 00:06:07.744 "immediate_data_pool_size": 16384, 00:06:07.744 "data_out_pool_size": 2048 00:06:07.744 } 00:06:07.744 } 00:06:07.744 ] 00:06:07.744 } 00:06:07.744 ] 00:06:07.744 } 00:06:07.744 11:14:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:07.744 11:14:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1992358 00:06:07.744 11:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1992358 ']' 00:06:07.744 11:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1992358 00:06:07.744 11:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:07.744 11:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:07.744 11:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1992358 00:06:08.002 11:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:08.002 11:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:08.002 11:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1992358' 00:06:08.002 killing process with pid 1992358 00:06:08.002 11:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1992358 00:06:08.002 11:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1992358 00:06:08.260 11:14:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1992498 00:06:08.260 11:14:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:08.260 11:14:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:13.524 11:14:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1992498 00:06:13.524 11:14:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1992498 ']' 00:06:13.524 11:14:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1992498 00:06:13.524 11:14:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:13.524 11:14:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:13.524 11:14:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1992498 00:06:13.524 11:14:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:13.524 11:14:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:13.524 11:14:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1992498' 00:06:13.524 killing process with pid 1992498 00:06:13.524 11:14:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1992498 00:06:13.524 11:14:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 
1992498 00:06:13.808 11:14:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:13.808 11:14:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:13.808 00:06:13.808 real 0m6.738s 00:06:13.808 user 0m6.314s 00:06:13.808 sys 0m0.770s 00:06:13.808 11:14:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.808 11:14:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:13.808 ************************************ 00:06:13.808 END TEST skip_rpc_with_json 00:06:13.808 ************************************ 00:06:14.068 11:14:09 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:14.068 11:14:09 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:14.068 11:14:09 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.068 11:14:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.068 ************************************ 00:06:14.068 START TEST skip_rpc_with_delay 00:06:14.068 ************************************ 00:06:14.068 11:14:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:14.068 11:14:09 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:14.068 11:14:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:14.068 11:14:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:14.068 11:14:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:14.068 11:14:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.068 11:14:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:14.068 11:14:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.069 11:14:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:14.069 11:14:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.069 11:14:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:14.069 11:14:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:14.069 11:14:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:14.069 [2024-07-26 11:14:09.594465] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
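Note: the spdk_app_start error above is the behavior skip_rpc_with_delay asserts, and the unclaim_cpu_cores line that follows is cleanup from the same failed start. The check reduces to one invocation (binary path as in this run):

  # expected to fail fast: with no RPC server there is nothing to wait for
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  echo $?   # non-zero exit; stderr carries the spdk_app_start error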
00:06:14.069 [2024-07-26 11:14:09.594630] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:14.069 11:14:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:14.069 11:14:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:14.069 11:14:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:14.069 11:14:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:14.069 00:06:14.069 real 0m0.138s 00:06:14.069 user 0m0.095s 00:06:14.069 sys 0m0.041s 00:06:14.069 11:14:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.069 11:14:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:14.069 ************************************ 00:06:14.069 END TEST skip_rpc_with_delay 00:06:14.069 ************************************ 00:06:14.069 11:14:09 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:14.069 11:14:09 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:14.069 11:14:09 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:14.069 11:14:09 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:14.069 11:14:09 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.069 11:14:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.069 ************************************ 00:06:14.069 START TEST exit_on_failed_rpc_init 00:06:14.069 ************************************ 00:06:14.069 11:14:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:14.069 11:14:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1993214 00:06:14.069 11:14:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:14.069 11:14:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1993214 00:06:14.069 11:14:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 1993214 ']' 00:06:14.069 11:14:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.069 11:14:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.069 11:14:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.069 11:14:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.069 11:14:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:14.327 [2024-07-26 11:14:09.752138] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
00:06:14.327 [2024-07-26 11:14:09.752234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1993214 ] 00:06:14.327 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.327 [2024-07-26 11:14:09.841474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.584 [2024-07-26 11:14:10.000857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.841 11:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:14.841 11:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:14.841 11:14:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:14.841 11:14:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:14.841 11:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:14.841 11:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:14.841 11:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:14.841 11:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.841 11:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:14.841 11:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.841 11:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:14.841 11:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.841 11:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:14.841 11:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:14.841 11:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:14.841 [2024-07-26 11:14:10.353273] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
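Annotation: the error reported just below ("RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.") is the point of this test: both targets default to the same RPC socket, so the second one must fail and spdk_app_stop with a non-zero code. A minimal reproduction under the same build-tree layout:

    ./build/bin/spdk_tgt -m 0x1 &          # first target claims /var/tmp/spdk.sock
    first=$!
    sleep 2                                # crude; the test uses waitforlisten instead
    ./build/bin/spdk_tgt -m 0x2            # second target: RPC socket in use
    echo "second target exit status: $?"   # expected: non-zero
    kill -SIGINT "$first"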
00:06:14.841 [2024-07-26 11:14:10.353369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1993278 ] 00:06:14.841 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.841 [2024-07-26 11:14:10.420169] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.098 [2024-07-26 11:14:10.545490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.098 [2024-07-26 11:14:10.545607] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:15.098 [2024-07-26 11:14:10.545629] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:15.098 [2024-07-26 11:14:10.545644] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:15.098 11:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:15.098 11:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:15.098 11:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:15.098 11:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:15.098 11:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:15.098 11:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:15.098 11:14:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:15.098 11:14:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1993214 00:06:15.098 11:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 1993214 ']' 00:06:15.098 11:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 1993214 00:06:15.098 11:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:15.098 11:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:15.098 11:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1993214 00:06:15.098 11:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:15.098 11:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:15.098 11:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1993214' 00:06:15.098 killing process with pid 1993214 00:06:15.098 11:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 1993214 00:06:15.098 11:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 1993214 00:06:15.664 00:06:15.664 real 0m1.502s 00:06:15.664 user 0m1.812s 00:06:15.664 sys 0m0.512s 00:06:15.664 11:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.664 11:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:15.664 ************************************ 00:06:15.664 END TEST exit_on_failed_rpc_init 00:06:15.664 ************************************ 00:06:15.664 11:14:11 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:15.664 00:06:15.664 real 0m14.230s 00:06:15.664 user 0m13.534s 00:06:15.664 sys 0m1.888s 00:06:15.664 11:14:11 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.664 11:14:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.664 ************************************ 00:06:15.664 END TEST skip_rpc 00:06:15.664 ************************************ 00:06:15.664 11:14:11 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:15.664 11:14:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.664 11:14:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.664 11:14:11 -- common/autotest_common.sh@10 -- # set +x 00:06:15.664 ************************************ 00:06:15.664 START TEST rpc_client 00:06:15.664 ************************************ 00:06:15.664 11:14:11 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:15.922 * Looking for test storage... 00:06:15.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:15.922 11:14:11 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:15.922 OK 00:06:15.922 11:14:11 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:15.922 00:06:15.922 real 0m0.083s 00:06:15.922 user 0m0.036s 00:06:15.922 sys 0m0.052s 00:06:15.922 11:14:11 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.922 11:14:11 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:15.922 ************************************ 00:06:15.922 END TEST rpc_client 00:06:15.922 ************************************ 00:06:15.922 11:14:11 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:15.922 11:14:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.922 11:14:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.922 11:14:11 -- common/autotest_common.sh@10 -- # set +x 00:06:15.922 ************************************ 00:06:15.922 START TEST json_config 00:06:15.922 ************************************ 00:06:15.922 11:14:11 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:15.922 11:14:11 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:15.922 11:14:11 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:15.922 11:14:11 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:15.922 11:14:11 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:15.922 11:14:11 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:15.922 11:14:11 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:15.922 11:14:11 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:15.922 11:14:11 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:15.922 11:14:11 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:15.922 11:14:11 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:15.922 11:14:11 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
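Annotation: every "START TEST ... / END TEST ..." banner pair in this log, like the rpc_client and json_config ones above, is printed by the run_test wrapper, which also produces the real/user/sys timing lines. A stripped-down sketch of its shape; the real version in autotest_common.sh additionally manages xtrace and validates its arguments:

    run_test_sketch() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"             # the real/user/sys lines seen throughout this log
        local rc=$?
        echo "END TEST $name"
        return $rc
    }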
00:06:15.922 11:14:11 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:15.922 11:14:11 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:15.922 11:14:11 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:15.922 11:14:11 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:15.922 11:14:11 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:15.922 11:14:11 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:15.922 11:14:11 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:15.922 11:14:11 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:15.922 11:14:11 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.922 11:14:11 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.922 11:14:11 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.922 11:14:11 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.922 11:14:11 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.922 11:14:11 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.922 11:14:11 json_config -- paths/export.sh@5 -- # export PATH 00:06:15.922 11:14:11 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.922 11:14:11 json_config -- nvmf/common.sh@47 -- # : 0 00:06:15.922 11:14:11 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:15.922 11:14:11 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:15.922 11:14:11 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:15.922 11:14:11 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:15.922 11:14:11 json_config -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:06:15.922 11:14:11 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:15.922 11:14:11 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:15.922 11:14:11 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:15.922 11:14:11 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:15.922 11:14:11 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:15.922 11:14:11 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:15.922 11:14:11 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:15.922 11:14:11 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:15.922 11:14:11 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:15.922 11:14:11 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:15.922 11:14:11 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:15.922 11:14:11 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:15.922 11:14:11 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:15.922 11:14:11 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:15.922 11:14:11 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:15.922 11:14:11 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:15.922 11:14:11 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:15.922 11:14:11 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:15.922 11:14:11 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:06:15.922 INFO: JSON configuration test init 00:06:15.922 11:14:11 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:06:15.922 11:14:11 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:06:15.922 11:14:11 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:15.922 11:14:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:15.922 11:14:11 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:06:15.922 11:14:11 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:15.922 11:14:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:15.922 11:14:11 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:06:15.922 11:14:11 json_config -- json_config/common.sh@9 -- # local app=target 00:06:15.922 11:14:11 json_config -- json_config/common.sh@10 -- # shift 00:06:15.922 11:14:11 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:15.922 11:14:11 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:15.922 11:14:11 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:15.922 11:14:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 
]] 00:06:15.922 11:14:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:15.922 11:14:11 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1993552 00:06:15.922 11:14:11 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:15.922 11:14:11 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:15.922 Waiting for target to run... 00:06:15.922 11:14:11 json_config -- json_config/common.sh@25 -- # waitforlisten 1993552 /var/tmp/spdk_tgt.sock 00:06:15.922 11:14:11 json_config -- common/autotest_common.sh@831 -- # '[' -z 1993552 ']' 00:06:15.922 11:14:11 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:15.922 11:14:11 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.922 11:14:11 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:15.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:15.922 11:14:11 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.922 11:14:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:15.923 [2024-07-26 11:14:11.548035] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:06:15.923 [2024-07-26 11:14:11.548136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1993552 ] 00:06:15.923 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.488 [2024-07-26 11:14:11.934551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.489 [2024-07-26 11:14:12.029141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.054 11:14:12 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.054 11:14:12 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:17.054 11:14:12 json_config -- json_config/common.sh@26 -- # echo '' 00:06:17.054 00:06:17.054 11:14:12 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:06:17.054 11:14:12 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:06:17.054 11:14:12 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:17.054 11:14:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.054 11:14:12 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:06:17.054 11:14:12 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:06:17.054 11:14:12 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:17.054 11:14:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.054 11:14:12 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:17.054 11:14:12 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:06:17.054 11:14:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:20.334 11:14:15 json_config -- json_config/json_config.sh@280 -- # 
tgt_check_notification_types 00:06:20.334 11:14:15 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:20.334 11:14:15 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:20.334 11:14:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:20.334 11:14:15 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:20.334 11:14:15 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:20.334 11:14:15 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:20.334 11:14:15 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:20.334 11:14:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:20.334 11:14:15 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:20.593 11:14:16 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:20.593 11:14:16 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:20.593 11:14:16 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:06:20.593 11:14:16 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:06:20.593 11:14:16 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:06:20.593 11:14:16 json_config -- json_config/json_config.sh@51 -- # sort 00:06:20.593 11:14:16 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:06:20.593 11:14:16 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:06:20.593 11:14:16 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:06:20.593 11:14:16 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:06:20.593 11:14:16 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:20.593 11:14:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:20.593 11:14:16 json_config -- json_config/json_config.sh@59 -- # return 0 00:06:20.593 11:14:16 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:20.593 11:14:16 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:20.593 11:14:16 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:06:20.593 11:14:16 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:06:20.593 11:14:16 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:06:20.593 11:14:16 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:06:20.593 11:14:16 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:20.593 11:14:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:20.593 11:14:16 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:20.593 11:14:16 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:06:20.593 11:14:16 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:06:20.593 11:14:16 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:20.593 11:14:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:21.158 MallocForNvmf0 00:06:21.158 
11:14:16 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:21.158 11:14:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:21.416 MallocForNvmf1 00:06:21.416 11:14:16 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:21.416 11:14:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:21.982 [2024-07-26 11:14:17.379592] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:21.982 11:14:17 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:21.982 11:14:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:22.240 11:14:17 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:22.240 11:14:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:22.805 11:14:18 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:22.805 11:14:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:23.062 11:14:18 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:23.062 11:14:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:23.319 [2024-07-26 11:14:18.900370] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:23.320 11:14:18 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:06:23.320 11:14:18 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:23.320 11:14:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:23.320 11:14:18 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:06:23.320 11:14:18 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:23.320 11:14:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:23.320 11:14:18 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:06:23.320 11:14:18 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:23.320 11:14:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:23.884 MallocBdevForConfigChangeCheck 00:06:23.884 11:14:19 json_config -- 
json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:06:23.884 11:14:19 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:23.884 11:14:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:23.884 11:14:19 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:06:23.884 11:14:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:24.816 11:14:20 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:06:24.816 INFO: shutting down applications... 00:06:24.816 11:14:20 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:06:24.816 11:14:20 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:06:24.816 11:14:20 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:06:24.816 11:14:20 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:26.187 Calling clear_iscsi_subsystem 00:06:26.187 Calling clear_nvmf_subsystem 00:06:26.187 Calling clear_nbd_subsystem 00:06:26.187 Calling clear_ublk_subsystem 00:06:26.187 Calling clear_vhost_blk_subsystem 00:06:26.187 Calling clear_vhost_scsi_subsystem 00:06:26.187 Calling clear_bdev_subsystem 00:06:26.187 11:14:21 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:26.187 11:14:21 json_config -- json_config/json_config.sh@347 -- # count=100 00:06:26.187 11:14:21 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:06:26.187 11:14:21 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:26.187 11:14:21 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:26.187 11:14:21 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:26.753 11:14:22 json_config -- json_config/json_config.sh@349 -- # break 00:06:26.754 11:14:22 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:06:26.754 11:14:22 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:06:26.754 11:14:22 json_config -- json_config/common.sh@31 -- # local app=target 00:06:26.754 11:14:22 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:26.754 11:14:22 json_config -- json_config/common.sh@35 -- # [[ -n 1993552 ]] 00:06:26.754 11:14:22 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1993552 00:06:26.754 11:14:22 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:26.754 11:14:22 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:26.754 11:14:22 json_config -- json_config/common.sh@41 -- # kill -0 1993552 00:06:26.754 11:14:22 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:27.322 11:14:22 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:27.322 11:14:22 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:27.322 11:14:22 json_config -- json_config/common.sh@41 -- # kill -0 1993552 00:06:27.322 11:14:22 json_config -- 
json_config/common.sh@42 -- # app_pid["$app"]= 00:06:27.322 11:14:22 json_config -- json_config/common.sh@43 -- # break 00:06:27.322 11:14:22 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:27.322 11:14:22 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:27.322 SPDK target shutdown done 00:06:27.322 11:14:22 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:06:27.322 INFO: relaunching applications... 00:06:27.322 11:14:22 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:27.322 11:14:22 json_config -- json_config/common.sh@9 -- # local app=target 00:06:27.322 11:14:22 json_config -- json_config/common.sh@10 -- # shift 00:06:27.322 11:14:22 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:27.322 11:14:22 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:27.322 11:14:22 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:27.322 11:14:22 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:27.322 11:14:22 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:27.322 11:14:22 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1994924 00:06:27.322 11:14:22 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:27.322 11:14:22 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:27.322 Waiting for target to run... 00:06:27.322 11:14:22 json_config -- json_config/common.sh@25 -- # waitforlisten 1994924 /var/tmp/spdk_tgt.sock 00:06:27.322 11:14:22 json_config -- common/autotest_common.sh@831 -- # '[' -z 1994924 ']' 00:06:27.322 11:14:22 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:27.322 11:14:22 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:27.322 11:14:22 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:27.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:27.322 11:14:22 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:27.322 11:14:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.322 [2024-07-26 11:14:22.815737] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
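Annotation: the relaunch above replays the JSON captured from the first target, which is how json_config round-trips its state. The essential pair of commands, with the flags used in this run:

    # Capture the live configuration from the running target ...
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
    # ... then boot a fresh target that applies it before taking traffic.
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json spdk_tgt_config.json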
00:06:27.322 [2024-07-26 11:14:22.815855] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1994924 ] 00:06:27.322 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.888 [2024-07-26 11:14:23.455965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.147 [2024-07-26 11:14:23.564961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.465 [2024-07-26 11:14:26.611324] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:31.465 [2024-07-26 11:14:26.643857] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:31.465 11:14:26 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:31.465 11:14:26 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:31.465 11:14:26 json_config -- json_config/common.sh@26 -- # echo '' 00:06:31.465 00:06:31.465 11:14:26 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:06:31.465 11:14:26 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:31.465 INFO: Checking if target configuration is the same... 00:06:31.465 11:14:26 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:31.465 11:14:26 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:06:31.465 11:14:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:31.465 + '[' 2 -ne 2 ']' 00:06:31.465 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:31.465 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:31.465 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:31.465 +++ basename /dev/fd/62 00:06:31.465 ++ mktemp /tmp/62.XXX 00:06:31.465 + tmp_file_1=/tmp/62.Ija 00:06:31.465 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:31.465 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:31.465 + tmp_file_2=/tmp/spdk_tgt_config.json.ffj 00:06:31.465 + ret=0 00:06:31.465 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:31.465 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:31.465 + diff -u /tmp/62.Ija /tmp/spdk_tgt_config.json.ffj 00:06:31.465 + echo 'INFO: JSON config files are the same' 00:06:31.465 INFO: JSON config files are the same 00:06:31.465 + rm /tmp/62.Ija /tmp/spdk_tgt_config.json.ffj 00:06:31.465 + exit 0 00:06:31.465 11:14:27 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:06:31.465 11:14:27 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:31.465 INFO: changing configuration and checking if this can be detected... 
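Annotation: the diff sequence above is json_diff.sh at work: both configurations are normalized with config_filter.py -method sort before comparison, so key ordering cannot cause a false mismatch. The core of that flow, assuming config_filter.py reads JSON on stdin as it does here:

    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | ./test/json_config/config_filter.py -method sort > live.json
    ./test/json_config/config_filter.py -method sort \
        < spdk_tgt_config.json > saved.json
    diff -u saved.json live.json && echo 'INFO: JSON config files are the same'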
00:06:31.465 11:14:27 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:31.465 11:14:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:32.031 11:14:27 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:32.031 11:14:27 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:06:32.031 11:14:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:32.031 + '[' 2 -ne 2 ']' 00:06:32.031 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:32.031 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:32.031 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:32.031 +++ basename /dev/fd/62 00:06:32.031 ++ mktemp /tmp/62.XXX 00:06:32.031 + tmp_file_1=/tmp/62.0tT 00:06:32.031 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:32.031 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:32.031 + tmp_file_2=/tmp/spdk_tgt_config.json.xVC 00:06:32.031 + ret=0 00:06:32.031 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:32.289 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:32.547 + diff -u /tmp/62.0tT /tmp/spdk_tgt_config.json.xVC 00:06:32.547 + ret=1 00:06:32.547 + echo '=== Start of file: /tmp/62.0tT ===' 00:06:32.547 + cat /tmp/62.0tT 00:06:32.547 + echo '=== End of file: /tmp/62.0tT ===' 00:06:32.547 + echo '' 00:06:32.547 + echo '=== Start of file: /tmp/spdk_tgt_config.json.xVC ===' 00:06:32.547 + cat /tmp/spdk_tgt_config.json.xVC 00:06:32.547 + echo '=== End of file: /tmp/spdk_tgt_config.json.xVC ===' 00:06:32.547 + echo '' 00:06:32.547 + rm /tmp/62.0tT /tmp/spdk_tgt_config.json.xVC 00:06:32.547 + exit 1 00:06:32.547 11:14:27 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:06:32.547 INFO: configuration change detected. 
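Annotation: the detected change is deliberate: the canary bdev MallocBdevForConfigChangeCheck created during setup is deleted above, so the saved and live configurations must now differ and diff exits 1. The single mutating call behind it:

    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
        bdev_malloc_delete MallocBdevForConfigChangeCheck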
00:06:32.547 11:14:27 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:06:32.547 11:14:27 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:06:32.547 11:14:27 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:32.547 11:14:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:32.547 11:14:27 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:06:32.547 11:14:27 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:06:32.547 11:14:27 json_config -- json_config/json_config.sh@321 -- # [[ -n 1994924 ]] 00:06:32.547 11:14:27 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:06:32.547 11:14:27 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:06:32.547 11:14:27 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:32.547 11:14:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:32.547 11:14:27 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:06:32.547 11:14:27 json_config -- json_config/json_config.sh@197 -- # uname -s 00:06:32.547 11:14:27 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:06:32.547 11:14:27 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:06:32.547 11:14:27 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:06:32.547 11:14:27 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:06:32.547 11:14:27 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:32.547 11:14:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:32.547 11:14:27 json_config -- json_config/json_config.sh@327 -- # killprocess 1994924 00:06:32.547 11:14:27 json_config -- common/autotest_common.sh@950 -- # '[' -z 1994924 ']' 00:06:32.547 11:14:27 json_config -- common/autotest_common.sh@954 -- # kill -0 1994924 00:06:32.547 11:14:27 json_config -- common/autotest_common.sh@955 -- # uname 00:06:32.547 11:14:27 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:32.547 11:14:27 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1994924 00:06:32.547 11:14:28 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:32.547 11:14:28 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:32.547 11:14:28 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1994924' 00:06:32.547 killing process with pid 1994924 00:06:32.547 11:14:28 json_config -- common/autotest_common.sh@969 -- # kill 1994924 00:06:32.547 11:14:28 json_config -- common/autotest_common.sh@974 -- # wait 1994924 00:06:34.447 11:14:29 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:34.447 11:14:29 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:06:34.447 11:14:29 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:34.447 11:14:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:34.447 11:14:29 json_config -- json_config/json_config.sh@332 -- # return 0 00:06:34.447 11:14:29 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:06:34.447 INFO: Success 00:06:34.447 00:06:34.447 real 0m18.280s 
00:06:34.447 user 0m22.182s 00:06:34.447 sys 0m2.594s 00:06:34.447 11:14:29 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.447 11:14:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:34.447 ************************************ 00:06:34.447 END TEST json_config 00:06:34.447 ************************************ 00:06:34.447 11:14:29 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:34.447 11:14:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.447 11:14:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.447 11:14:29 -- common/autotest_common.sh@10 -- # set +x 00:06:34.447 ************************************ 00:06:34.447 START TEST json_config_extra_key 00:06:34.447 ************************************ 00:06:34.447 11:14:29 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:34.447 11:14:29 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:34.447 11:14:29 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:34.447 11:14:29 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:34.447 11:14:29 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:34.447 11:14:29 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:34.447 11:14:29 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:34.447 11:14:29 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:34.447 11:14:29 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:34.447 11:14:29 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:34.447 11:14:29 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:34.447 11:14:29 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:34.447 11:14:29 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:34.447 11:14:29 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:34.447 11:14:29 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:34.447 11:14:29 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:34.447 11:14:29 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:34.447 11:14:29 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:34.447 11:14:29 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:34.447 11:14:29 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:34.447 11:14:29 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:34.447 11:14:29 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:34.447 11:14:29 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:34.447 11:14:29 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.447 11:14:29 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.447 11:14:29 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.447 11:14:29 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:34.447 11:14:29 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.447 11:14:29 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:34.447 11:14:29 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:34.447 11:14:29 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:34.447 11:14:29 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:34.447 11:14:29 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:34.447 11:14:29 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:34.447 11:14:29 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:34.447 11:14:29 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:34.447 11:14:29 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:34.447 11:14:29 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:34.447 11:14:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:34.447 11:14:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:34.447 11:14:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:34.447 11:14:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:34.447 11:14:29 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:34.447 11:14:29 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:34.447 11:14:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:34.447 11:14:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:34.447 11:14:29 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:34.448 11:14:29 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:34.448 INFO: launching applications... 00:06:34.448 11:14:29 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:34.448 11:14:29 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:34.448 11:14:29 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:34.448 11:14:29 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:34.448 11:14:29 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:34.448 11:14:29 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:34.448 11:14:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:34.448 11:14:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:34.448 11:14:29 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1995890 00:06:34.448 11:14:29 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:34.448 11:14:29 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:34.448 Waiting for target to run... 00:06:34.448 11:14:29 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1995890 /var/tmp/spdk_tgt.sock 00:06:34.448 11:14:29 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 1995890 ']' 00:06:34.448 11:14:29 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:34.448 11:14:29 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.448 11:14:29 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:34.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:34.448 11:14:29 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.448 11:14:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:34.448 [2024-07-26 11:14:29.904790] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
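Annotation: the shutdown sequence that follows (and the one json_config used earlier) sends SIGINT, then polls with kill -0 for up to 30 half-second intervals before declaring "SPDK target shutdown done". In sketch form:

    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'   # target exited
            break
        fi
        sleep 0.5
    done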
00:06:34.448 [2024-07-26 11:14:29.904892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1995890 ] 00:06:34.448 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.014 [2024-07-26 11:14:30.466009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.014 [2024-07-26 11:14:30.575154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.947 11:14:31 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:35.947 11:14:31 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:35.947 11:14:31 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:35.947 00:06:35.947 11:14:31 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:35.947 INFO: shutting down applications... 00:06:35.947 11:14:31 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:35.947 11:14:31 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:35.947 11:14:31 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:35.947 11:14:31 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1995890 ]] 00:06:35.947 11:14:31 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1995890 00:06:35.947 11:14:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:35.947 11:14:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:35.947 11:14:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1995890 00:06:35.947 11:14:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:36.206 11:14:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:36.206 11:14:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:36.206 11:14:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1995890 00:06:36.206 11:14:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:36.775 11:14:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:36.775 11:14:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:36.775 11:14:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1995890 00:06:36.775 11:14:32 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:36.775 11:14:32 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:36.775 11:14:32 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:36.775 11:14:32 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:36.775 SPDK target shutdown done 00:06:36.775 11:14:32 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:36.775 Success 00:06:36.775 00:06:36.775 real 0m2.485s 00:06:36.775 user 0m2.150s 00:06:36.775 sys 0m0.709s 00:06:36.775 11:14:32 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.775 11:14:32 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:36.775 ************************************ 00:06:36.775 END TEST json_config_extra_key 00:06:36.775 ************************************ 00:06:36.775 11:14:32 -- spdk/autotest.sh@174 -- # 
run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:36.775 11:14:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:36.775 11:14:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.775 11:14:32 -- common/autotest_common.sh@10 -- # set +x 00:06:36.775 ************************************ 00:06:36.775 START TEST alias_rpc 00:06:36.775 ************************************ 00:06:36.775 11:14:32 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:36.775 * Looking for test storage... 00:06:36.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:36.775 11:14:32 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:36.775 11:14:32 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1996275 00:06:36.775 11:14:32 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:36.775 11:14:32 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1996275 00:06:36.775 11:14:32 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 1996275 ']' 00:06:36.775 11:14:32 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.775 11:14:32 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:36.775 11:14:32 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.775 11:14:32 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:36.775 11:14:32 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.033 [2024-07-26 11:14:32.468134] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
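Annotation: the trap above ("killprocess $spdk_tgt_pid") reuses the killprocess helper seen throughout this log: confirm the pid is still alive, check its comm (reactor_0 here), then kill and reap it. A rough sketch; the real helper special-cases sudo-owned processes, while refusing here keeps the sketch simple:

    killprocess_sketch() {
        local pid=$1
        kill -0 "$pid" || return 1                  # nothing to kill
        local name
        name=$(ps --no-headers -o comm= "$pid")     # reactor_0 for spdk_tgt
        [ "$name" = sudo ] && return 1              # see note above
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }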
00:06:37.033 [2024-07-26 11:14:32.468253] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1996275 ] 00:06:37.033 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.033 [2024-07-26 11:14:32.542281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.034 [2024-07-26 11:14:32.664983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.292 11:14:32 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:37.292 11:14:32 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:37.292 11:14:32 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:37.858 11:14:33 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1996275 00:06:37.858 11:14:33 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 1996275 ']' 00:06:37.858 11:14:33 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 1996275 00:06:37.858 11:14:33 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:37.858 11:14:33 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:37.858 11:14:33 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1996275 00:06:38.116 11:14:33 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:38.116 11:14:33 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:38.116 11:14:33 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1996275' 00:06:38.116 killing process with pid 1996275 00:06:38.116 11:14:33 alias_rpc -- common/autotest_common.sh@969 -- # kill 1996275 00:06:38.116 11:14:33 alias_rpc -- common/autotest_common.sh@974 -- # wait 1996275 00:06:38.374 00:06:38.374 real 0m1.707s 00:06:38.374 user 0m2.067s 00:06:38.374 sys 0m0.534s 00:06:38.374 11:14:34 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.374 11:14:34 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.374 ************************************ 00:06:38.374 END TEST alias_rpc 00:06:38.374 ************************************ 00:06:38.633 11:14:34 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:38.633 11:14:34 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:38.633 11:14:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.633 11:14:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.633 11:14:34 -- common/autotest_common.sh@10 -- # set +x 00:06:38.633 ************************************ 00:06:38.633 START TEST spdkcli_tcp 00:06:38.633 ************************************ 00:06:38.633 11:14:34 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:38.633 * Looking for test storage... 
00:06:38.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:38.633 11:14:34 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:38.633 11:14:34 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:38.633 11:14:34 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:38.633 11:14:34 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:38.633 11:14:34 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:38.633 11:14:34 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:38.633 11:14:34 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:38.633 11:14:34 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:38.633 11:14:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:38.633 11:14:34 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1996473 00:06:38.633 11:14:34 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:38.633 11:14:34 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1996473 00:06:38.633 11:14:34 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 1996473 ']' 00:06:38.633 11:14:34 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.633 11:14:34 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:38.633 11:14:34 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.633 11:14:34 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:38.633 11:14:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:38.633 [2024-07-26 11:14:34.222925] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
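The spdkcli_tcp test talks to the target over TCP rather than the default UNIX socket; the bridge it sets up just below boils down to the following (addresses and flags as configured in tcp.sh):

socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &            # expose the RPC socket on TCP 9998
socat_pid=$!
scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods    # -r retries, -t timeout in seconds
kill "$socat_pid"

The rpc_get_methods reply is the long JSON array of method names printed in the log that follows.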
00:06:38.633 [2024-07-26 11:14:34.223034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1996473 ] 00:06:38.633 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.892 [2024-07-26 11:14:34.321222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:38.892 [2024-07-26 11:14:34.483251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.892 [2024-07-26 11:14:34.483263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.150 11:14:34 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:39.150 11:14:34 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:39.150 11:14:34 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1996602 00:06:39.150 11:14:34 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:39.150 11:14:34 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:39.408 [ 00:06:39.408 "bdev_malloc_delete", 00:06:39.408 "bdev_malloc_create", 00:06:39.408 "bdev_null_resize", 00:06:39.408 "bdev_null_delete", 00:06:39.408 "bdev_null_create", 00:06:39.408 "bdev_nvme_cuse_unregister", 00:06:39.408 "bdev_nvme_cuse_register", 00:06:39.408 "bdev_opal_new_user", 00:06:39.408 "bdev_opal_set_lock_state", 00:06:39.408 "bdev_opal_delete", 00:06:39.408 "bdev_opal_get_info", 00:06:39.408 "bdev_opal_create", 00:06:39.408 "bdev_nvme_opal_revert", 00:06:39.408 "bdev_nvme_opal_init", 00:06:39.408 "bdev_nvme_send_cmd", 00:06:39.408 "bdev_nvme_get_path_iostat", 00:06:39.408 "bdev_nvme_get_mdns_discovery_info", 00:06:39.408 "bdev_nvme_stop_mdns_discovery", 00:06:39.408 "bdev_nvme_start_mdns_discovery", 00:06:39.408 "bdev_nvme_set_multipath_policy", 00:06:39.408 "bdev_nvme_set_preferred_path", 00:06:39.408 "bdev_nvme_get_io_paths", 00:06:39.408 "bdev_nvme_remove_error_injection", 00:06:39.408 "bdev_nvme_add_error_injection", 00:06:39.408 "bdev_nvme_get_discovery_info", 00:06:39.408 "bdev_nvme_stop_discovery", 00:06:39.408 "bdev_nvme_start_discovery", 00:06:39.408 "bdev_nvme_get_controller_health_info", 00:06:39.408 "bdev_nvme_disable_controller", 00:06:39.408 "bdev_nvme_enable_controller", 00:06:39.408 "bdev_nvme_reset_controller", 00:06:39.408 "bdev_nvme_get_transport_statistics", 00:06:39.408 "bdev_nvme_apply_firmware", 00:06:39.408 "bdev_nvme_detach_controller", 00:06:39.408 "bdev_nvme_get_controllers", 00:06:39.408 "bdev_nvme_attach_controller", 00:06:39.408 "bdev_nvme_set_hotplug", 00:06:39.408 "bdev_nvme_set_options", 00:06:39.408 "bdev_passthru_delete", 00:06:39.408 "bdev_passthru_create", 00:06:39.408 "bdev_lvol_set_parent_bdev", 00:06:39.408 "bdev_lvol_set_parent", 00:06:39.408 "bdev_lvol_check_shallow_copy", 00:06:39.408 "bdev_lvol_start_shallow_copy", 00:06:39.408 "bdev_lvol_grow_lvstore", 00:06:39.408 "bdev_lvol_get_lvols", 00:06:39.408 "bdev_lvol_get_lvstores", 00:06:39.408 "bdev_lvol_delete", 00:06:39.408 "bdev_lvol_set_read_only", 00:06:39.408 "bdev_lvol_resize", 00:06:39.408 "bdev_lvol_decouple_parent", 00:06:39.408 "bdev_lvol_inflate", 00:06:39.408 "bdev_lvol_rename", 00:06:39.408 "bdev_lvol_clone_bdev", 00:06:39.408 "bdev_lvol_clone", 00:06:39.408 "bdev_lvol_snapshot", 00:06:39.408 "bdev_lvol_create", 00:06:39.408 "bdev_lvol_delete_lvstore", 00:06:39.408 
"bdev_lvol_rename_lvstore", 00:06:39.408 "bdev_lvol_create_lvstore", 00:06:39.408 "bdev_raid_set_options", 00:06:39.408 "bdev_raid_remove_base_bdev", 00:06:39.408 "bdev_raid_add_base_bdev", 00:06:39.408 "bdev_raid_delete", 00:06:39.408 "bdev_raid_create", 00:06:39.408 "bdev_raid_get_bdevs", 00:06:39.408 "bdev_error_inject_error", 00:06:39.408 "bdev_error_delete", 00:06:39.408 "bdev_error_create", 00:06:39.408 "bdev_split_delete", 00:06:39.408 "bdev_split_create", 00:06:39.408 "bdev_delay_delete", 00:06:39.408 "bdev_delay_create", 00:06:39.408 "bdev_delay_update_latency", 00:06:39.408 "bdev_zone_block_delete", 00:06:39.408 "bdev_zone_block_create", 00:06:39.408 "blobfs_create", 00:06:39.408 "blobfs_detect", 00:06:39.408 "blobfs_set_cache_size", 00:06:39.408 "bdev_aio_delete", 00:06:39.408 "bdev_aio_rescan", 00:06:39.408 "bdev_aio_create", 00:06:39.408 "bdev_ftl_set_property", 00:06:39.408 "bdev_ftl_get_properties", 00:06:39.408 "bdev_ftl_get_stats", 00:06:39.408 "bdev_ftl_unmap", 00:06:39.408 "bdev_ftl_unload", 00:06:39.408 "bdev_ftl_delete", 00:06:39.408 "bdev_ftl_load", 00:06:39.408 "bdev_ftl_create", 00:06:39.408 "bdev_virtio_attach_controller", 00:06:39.408 "bdev_virtio_scsi_get_devices", 00:06:39.408 "bdev_virtio_detach_controller", 00:06:39.408 "bdev_virtio_blk_set_hotplug", 00:06:39.408 "bdev_iscsi_delete", 00:06:39.408 "bdev_iscsi_create", 00:06:39.408 "bdev_iscsi_set_options", 00:06:39.408 "accel_error_inject_error", 00:06:39.408 "ioat_scan_accel_module", 00:06:39.408 "dsa_scan_accel_module", 00:06:39.408 "iaa_scan_accel_module", 00:06:39.408 "vfu_virtio_create_scsi_endpoint", 00:06:39.408 "vfu_virtio_scsi_remove_target", 00:06:39.408 "vfu_virtio_scsi_add_target", 00:06:39.408 "vfu_virtio_create_blk_endpoint", 00:06:39.408 "vfu_virtio_delete_endpoint", 00:06:39.408 "keyring_file_remove_key", 00:06:39.408 "keyring_file_add_key", 00:06:39.408 "keyring_linux_set_options", 00:06:39.408 "iscsi_get_histogram", 00:06:39.408 "iscsi_enable_histogram", 00:06:39.408 "iscsi_set_options", 00:06:39.408 "iscsi_get_auth_groups", 00:06:39.408 "iscsi_auth_group_remove_secret", 00:06:39.408 "iscsi_auth_group_add_secret", 00:06:39.408 "iscsi_delete_auth_group", 00:06:39.408 "iscsi_create_auth_group", 00:06:39.408 "iscsi_set_discovery_auth", 00:06:39.408 "iscsi_get_options", 00:06:39.408 "iscsi_target_node_request_logout", 00:06:39.408 "iscsi_target_node_set_redirect", 00:06:39.408 "iscsi_target_node_set_auth", 00:06:39.408 "iscsi_target_node_add_lun", 00:06:39.408 "iscsi_get_stats", 00:06:39.408 "iscsi_get_connections", 00:06:39.408 "iscsi_portal_group_set_auth", 00:06:39.408 "iscsi_start_portal_group", 00:06:39.408 "iscsi_delete_portal_group", 00:06:39.408 "iscsi_create_portal_group", 00:06:39.408 "iscsi_get_portal_groups", 00:06:39.409 "iscsi_delete_target_node", 00:06:39.409 "iscsi_target_node_remove_pg_ig_maps", 00:06:39.409 "iscsi_target_node_add_pg_ig_maps", 00:06:39.409 "iscsi_create_target_node", 00:06:39.409 "iscsi_get_target_nodes", 00:06:39.409 "iscsi_delete_initiator_group", 00:06:39.409 "iscsi_initiator_group_remove_initiators", 00:06:39.409 "iscsi_initiator_group_add_initiators", 00:06:39.409 "iscsi_create_initiator_group", 00:06:39.409 "iscsi_get_initiator_groups", 00:06:39.409 "nvmf_set_crdt", 00:06:39.409 "nvmf_set_config", 00:06:39.409 "nvmf_set_max_subsystems", 00:06:39.409 "nvmf_stop_mdns_prr", 00:06:39.409 "nvmf_publish_mdns_prr", 00:06:39.409 "nvmf_subsystem_get_listeners", 00:06:39.409 "nvmf_subsystem_get_qpairs", 00:06:39.409 "nvmf_subsystem_get_controllers", 00:06:39.409 
"nvmf_get_stats", 00:06:39.409 "nvmf_get_transports", 00:06:39.409 "nvmf_create_transport", 00:06:39.409 "nvmf_get_targets", 00:06:39.409 "nvmf_delete_target", 00:06:39.409 "nvmf_create_target", 00:06:39.409 "nvmf_subsystem_allow_any_host", 00:06:39.409 "nvmf_subsystem_remove_host", 00:06:39.409 "nvmf_subsystem_add_host", 00:06:39.409 "nvmf_ns_remove_host", 00:06:39.409 "nvmf_ns_add_host", 00:06:39.409 "nvmf_subsystem_remove_ns", 00:06:39.409 "nvmf_subsystem_add_ns", 00:06:39.409 "nvmf_subsystem_listener_set_ana_state", 00:06:39.409 "nvmf_discovery_get_referrals", 00:06:39.409 "nvmf_discovery_remove_referral", 00:06:39.409 "nvmf_discovery_add_referral", 00:06:39.409 "nvmf_subsystem_remove_listener", 00:06:39.409 "nvmf_subsystem_add_listener", 00:06:39.409 "nvmf_delete_subsystem", 00:06:39.409 "nvmf_create_subsystem", 00:06:39.409 "nvmf_get_subsystems", 00:06:39.409 "env_dpdk_get_mem_stats", 00:06:39.409 "nbd_get_disks", 00:06:39.409 "nbd_stop_disk", 00:06:39.409 "nbd_start_disk", 00:06:39.409 "ublk_recover_disk", 00:06:39.409 "ublk_get_disks", 00:06:39.409 "ublk_stop_disk", 00:06:39.409 "ublk_start_disk", 00:06:39.409 "ublk_destroy_target", 00:06:39.409 "ublk_create_target", 00:06:39.409 "virtio_blk_create_transport", 00:06:39.409 "virtio_blk_get_transports", 00:06:39.409 "vhost_controller_set_coalescing", 00:06:39.409 "vhost_get_controllers", 00:06:39.409 "vhost_delete_controller", 00:06:39.409 "vhost_create_blk_controller", 00:06:39.409 "vhost_scsi_controller_remove_target", 00:06:39.409 "vhost_scsi_controller_add_target", 00:06:39.409 "vhost_start_scsi_controller", 00:06:39.409 "vhost_create_scsi_controller", 00:06:39.409 "thread_set_cpumask", 00:06:39.409 "framework_get_governor", 00:06:39.409 "framework_get_scheduler", 00:06:39.409 "framework_set_scheduler", 00:06:39.409 "framework_get_reactors", 00:06:39.409 "thread_get_io_channels", 00:06:39.409 "thread_get_pollers", 00:06:39.409 "thread_get_stats", 00:06:39.409 "framework_monitor_context_switch", 00:06:39.409 "spdk_kill_instance", 00:06:39.409 "log_enable_timestamps", 00:06:39.409 "log_get_flags", 00:06:39.409 "log_clear_flag", 00:06:39.409 "log_set_flag", 00:06:39.409 "log_get_level", 00:06:39.409 "log_set_level", 00:06:39.409 "log_get_print_level", 00:06:39.409 "log_set_print_level", 00:06:39.409 "framework_enable_cpumask_locks", 00:06:39.409 "framework_disable_cpumask_locks", 00:06:39.409 "framework_wait_init", 00:06:39.409 "framework_start_init", 00:06:39.409 "scsi_get_devices", 00:06:39.409 "bdev_get_histogram", 00:06:39.409 "bdev_enable_histogram", 00:06:39.409 "bdev_set_qos_limit", 00:06:39.409 "bdev_set_qd_sampling_period", 00:06:39.409 "bdev_get_bdevs", 00:06:39.409 "bdev_reset_iostat", 00:06:39.409 "bdev_get_iostat", 00:06:39.409 "bdev_examine", 00:06:39.409 "bdev_wait_for_examine", 00:06:39.409 "bdev_set_options", 00:06:39.409 "notify_get_notifications", 00:06:39.409 "notify_get_types", 00:06:39.409 "accel_get_stats", 00:06:39.409 "accel_set_options", 00:06:39.409 "accel_set_driver", 00:06:39.409 "accel_crypto_key_destroy", 00:06:39.409 "accel_crypto_keys_get", 00:06:39.409 "accel_crypto_key_create", 00:06:39.409 "accel_assign_opc", 00:06:39.409 "accel_get_module_info", 00:06:39.409 "accel_get_opc_assignments", 00:06:39.409 "vmd_rescan", 00:06:39.409 "vmd_remove_device", 00:06:39.409 "vmd_enable", 00:06:39.409 "sock_get_default_impl", 00:06:39.409 "sock_set_default_impl", 00:06:39.409 "sock_impl_set_options", 00:06:39.409 "sock_impl_get_options", 00:06:39.409 "iobuf_get_stats", 00:06:39.409 "iobuf_set_options", 
00:06:39.409 "keyring_get_keys", 00:06:39.409 "framework_get_pci_devices", 00:06:39.409 "framework_get_config", 00:06:39.409 "framework_get_subsystems", 00:06:39.409 "vfu_tgt_set_base_path", 00:06:39.409 "trace_get_info", 00:06:39.409 "trace_get_tpoint_group_mask", 00:06:39.409 "trace_disable_tpoint_group", 00:06:39.409 "trace_enable_tpoint_group", 00:06:39.409 "trace_clear_tpoint_mask", 00:06:39.409 "trace_set_tpoint_mask", 00:06:39.409 "spdk_get_version", 00:06:39.409 "rpc_get_methods" 00:06:39.409 ] 00:06:39.409 11:14:35 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:39.409 11:14:35 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:39.409 11:14:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:39.667 11:14:35 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:39.667 11:14:35 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1996473 00:06:39.667 11:14:35 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 1996473 ']' 00:06:39.667 11:14:35 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 1996473 00:06:39.667 11:14:35 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:39.667 11:14:35 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:39.667 11:14:35 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1996473 00:06:39.667 11:14:35 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:39.667 11:14:35 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:39.667 11:14:35 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1996473' 00:06:39.667 killing process with pid 1996473 00:06:39.667 11:14:35 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 1996473 00:06:39.667 11:14:35 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 1996473 00:06:40.232 00:06:40.232 real 0m1.522s 00:06:40.232 user 0m2.724s 00:06:40.232 sys 0m0.522s 00:06:40.232 11:14:35 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:40.232 11:14:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:40.232 ************************************ 00:06:40.232 END TEST spdkcli_tcp 00:06:40.232 ************************************ 00:06:40.232 11:14:35 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:40.232 11:14:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:40.232 11:14:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.232 11:14:35 -- common/autotest_common.sh@10 -- # set +x 00:06:40.232 ************************************ 00:06:40.232 START TEST dpdk_mem_utility 00:06:40.232 ************************************ 00:06:40.233 11:14:35 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:40.233 * Looking for test storage... 
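The dpdk_mem_utility run whose output follows exercises two entry points: the env_dpdk_get_mem_stats RPC, which makes the target write its DPDK allocator state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py, which parses that dump. Roughly, as the test invokes them:

scripts/rpc.py env_dpdk_get_mem_stats    # target dumps allocator state to /tmp/spdk_mem_dump.txt
scripts/dpdk_mem_info.py                 # heap / mempool / memzone size summary
scripts/dpdk_mem_info.py -m 0            # element-level listing, as run by the test below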
00:06:40.233 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:40.233 11:14:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:40.233 11:14:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1996796 00:06:40.233 11:14:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:40.233 11:14:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1996796 00:06:40.233 11:14:35 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 1996796 ']' 00:06:40.233 11:14:35 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.233 11:14:35 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:40.233 11:14:35 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.233 11:14:35 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:40.233 11:14:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:40.233 [2024-07-26 11:14:35.801965] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:06:40.233 [2024-07-26 11:14:35.802066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1996796 ] 00:06:40.233 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.233 [2024-07-26 11:14:35.872303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.491 [2024-07-26 11:14:35.995649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.750 11:14:36 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.750 11:14:36 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:40.750 11:14:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:40.750 11:14:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:40.750 11:14:36 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.750 11:14:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:40.750 { 00:06:40.750 "filename": "/tmp/spdk_mem_dump.txt" 00:06:40.750 } 00:06:40.750 11:14:36 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.750 11:14:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:40.750 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:40.750 1 heaps totaling size 814.000000 MiB 00:06:40.750 size: 814.000000 MiB heap id: 0 00:06:40.750 end heaps---------- 00:06:40.750 8 mempools totaling size 598.116089 MiB 00:06:40.750 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:40.750 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:40.750 size: 84.521057 MiB name: bdev_io_1996796 00:06:40.750 size: 51.011292 MiB name: evtpool_1996796 00:06:40.750 
size: 50.003479 MiB name: msgpool_1996796 00:06:40.750 size: 21.763794 MiB name: PDU_Pool 00:06:40.750 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:40.750 size: 0.026123 MiB name: Session_Pool 00:06:40.750 end mempools------- 00:06:40.750 6 memzones totaling size 4.142822 MiB 00:06:40.750 size: 1.000366 MiB name: RG_ring_0_1996796 00:06:40.750 size: 1.000366 MiB name: RG_ring_1_1996796 00:06:40.750 size: 1.000366 MiB name: RG_ring_4_1996796 00:06:40.750 size: 1.000366 MiB name: RG_ring_5_1996796 00:06:40.750 size: 0.125366 MiB name: RG_ring_2_1996796 00:06:40.750 size: 0.015991 MiB name: RG_ring_3_1996796 00:06:40.750 end memzones------- 00:06:40.750 11:14:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:40.750 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:40.750 list of free elements. size: 12.519348 MiB 00:06:40.750 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:40.750 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:40.750 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:40.750 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:40.750 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:40.750 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:40.750 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:40.750 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:40.750 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:40.750 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:40.750 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:40.750 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:40.750 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:40.750 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:40.750 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:40.750 list of standard malloc elements. 
size: 199.218079 MiB 00:06:40.750 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:40.750 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:40.750 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:40.750 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:40.750 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:40.750 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:40.750 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:40.750 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:40.751 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:40.751 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:40.751 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:40.751 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:40.751 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:40.751 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:40.751 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:40.751 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:40.751 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:40.751 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:40.751 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:40.751 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:40.751 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:40.751 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:40.751 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:40.751 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:40.751 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:40.751 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:40.751 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:40.751 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:40.751 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:40.751 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:40.751 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:40.751 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:40.751 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:40.751 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:40.751 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:40.751 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:40.751 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:40.751 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:40.751 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:40.751 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:40.751 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:40.751 list of memzone associated elements. 
size: 602.262573 MiB 00:06:40.751 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:40.751 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:40.751 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:40.751 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:40.751 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:40.751 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1996796_0 00:06:40.751 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:40.751 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1996796_0 00:06:40.751 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:40.751 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1996796_0 00:06:40.751 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:40.751 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:40.751 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:40.751 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:40.751 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:40.751 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1996796 00:06:40.751 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:40.751 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1996796 00:06:40.751 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:40.751 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1996796 00:06:40.751 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:40.751 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:40.751 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:40.751 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:40.751 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:40.751 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:40.751 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:40.751 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:40.751 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:40.751 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1996796 00:06:40.751 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:40.751 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1996796 00:06:40.751 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:40.751 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1996796 00:06:40.751 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:40.751 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1996796 00:06:40.751 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:40.751 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1996796 00:06:40.751 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:40.751 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:40.751 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:40.751 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:40.751 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:40.751 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:40.751 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:40.751 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1996796 00:06:40.751 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:40.751 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:40.751 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:40.751 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:40.751 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:40.751 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1996796 00:06:40.751 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:40.751 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:40.751 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:40.751 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1996796 00:06:40.751 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:40.751 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1996796 00:06:40.751 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:40.751 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:40.751 11:14:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:40.751 11:14:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1996796 00:06:40.751 11:14:36 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 1996796 ']' 00:06:40.751 11:14:36 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 1996796 00:06:40.751 11:14:36 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:40.751 11:14:36 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:40.751 11:14:36 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1996796 00:06:41.009 11:14:36 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:41.009 11:14:36 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:41.009 11:14:36 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1996796' 00:06:41.009 killing process with pid 1996796 00:06:41.009 11:14:36 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 1996796 00:06:41.009 11:14:36 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 1996796 00:06:41.268 00:06:41.268 real 0m1.252s 00:06:41.268 user 0m1.227s 00:06:41.268 sys 0m0.448s 00:06:41.268 11:14:36 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.268 11:14:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:41.268 ************************************ 00:06:41.268 END TEST dpdk_mem_utility 00:06:41.268 ************************************ 00:06:41.527 11:14:36 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:41.527 11:14:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.527 11:14:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.527 11:14:36 -- common/autotest_common.sh@10 -- # set +x 00:06:41.527 ************************************ 00:06:41.527 START TEST event 00:06:41.527 ************************************ 00:06:41.527 11:14:36 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:41.527 * Looking for test storage... 
00:06:41.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:41.527 11:14:37 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:41.527 11:14:37 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:41.527 11:14:37 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:41.527 11:14:37 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:41.527 11:14:37 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.527 11:14:37 event -- common/autotest_common.sh@10 -- # set +x 00:06:41.527 ************************************ 00:06:41.527 START TEST event_perf 00:06:41.527 ************************************ 00:06:41.527 11:14:37 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:41.527 Running I/O for 1 seconds...[2024-07-26 11:14:37.082795] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:06:41.527 [2024-07-26 11:14:37.082865] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1996986 ] 00:06:41.527 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.527 [2024-07-26 11:14:37.154060] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:41.785 [2024-07-26 11:14:37.284273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.785 [2024-07-26 11:14:37.284344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.785 [2024-07-26 11:14:37.284415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:41.785 [2024-07-26 11:14:37.284420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.159 Running I/O for 1 seconds... 00:06:43.159 lcore 0: 210646 00:06:43.159 lcore 1: 210647 00:06:43.159 lcore 2: 210647 00:06:43.159 lcore 3: 210646 00:06:43.159 done. 00:06:43.159 00:06:43.159 real 0m1.345s 00:06:43.159 user 0m4.239s 00:06:43.159 sys 0m0.099s 00:06:43.159 11:14:38 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.159 11:14:38 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:43.159 ************************************ 00:06:43.159 END TEST event_perf 00:06:43.159 ************************************ 00:06:43.159 11:14:38 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:43.159 11:14:38 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:43.159 11:14:38 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.159 11:14:38 event -- common/autotest_common.sh@10 -- # set +x 00:06:43.159 ************************************ 00:06:43.159 START TEST event_reactor 00:06:43.159 ************************************ 00:06:43.159 11:14:38 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:43.159 [2024-07-26 11:14:38.503879] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
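The event_perf run above pins one reactor per core in the 0xF mask and counts events for one second; each "lcore N:" line is that core's processed-event total. Equivalent standalone invocation, path relative to this workspace:

# -m 0xF : core mask, cores 0-3 (the four lcore counters above)
# -t 1   : measurement window in seconds
test/event/event_perf/event_perf -m 0xF -t 1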
00:06:43.159 [2024-07-26 11:14:38.503957] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1997150 ] 00:06:43.159 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.159 [2024-07-26 11:14:38.600563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.159 [2024-07-26 11:14:38.724584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.533 test_start 00:06:44.533 oneshot 00:06:44.533 tick 100 00:06:44.533 tick 100 00:06:44.533 tick 250 00:06:44.533 tick 100 00:06:44.533 tick 100 00:06:44.533 tick 100 00:06:44.533 tick 250 00:06:44.533 tick 500 00:06:44.533 tick 100 00:06:44.533 tick 100 00:06:44.533 tick 250 00:06:44.533 tick 100 00:06:44.533 tick 100 00:06:44.533 test_end 00:06:44.533 00:06:44.533 real 0m1.370s 00:06:44.533 user 0m1.253s 00:06:44.533 sys 0m0.111s 00:06:44.533 11:14:39 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.533 11:14:39 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:44.533 ************************************ 00:06:44.533 END TEST event_reactor 00:06:44.533 ************************************ 00:06:44.533 11:14:39 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:44.533 11:14:39 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:44.533 11:14:39 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.533 11:14:39 event -- common/autotest_common.sh@10 -- # set +x 00:06:44.533 ************************************ 00:06:44.533 START TEST event_reactor_perf 00:06:44.533 ************************************ 00:06:44.533 11:14:39 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:44.533 [2024-07-26 11:14:39.922865] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
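Every suite in this log is launched through the run_test helper from common/autotest_common.sh, which produces the starred START TEST / END TEST banners and the real/user/sys timing lines after each test. A simplified sketch of its shape (the real helper also manages xtrace state):

run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                                 # emits the real/user/sys summary
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}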
00:06:44.533 [2024-07-26 11:14:39.922943] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1997309 ] 00:06:44.533 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.533 [2024-07-26 11:14:39.996419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.533 [2024-07-26 11:14:40.124160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.959 test_start 00:06:45.959 test_end 00:06:45.959 Performance: 356981 events per second 00:06:45.959 00:06:45.959 real 0m1.346s 00:06:45.959 user 0m1.244s 00:06:45.959 sys 0m0.096s 00:06:45.959 11:14:41 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.959 11:14:41 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:45.959 ************************************ 00:06:45.959 END TEST event_reactor_perf 00:06:45.959 ************************************ 00:06:45.959 11:14:41 event -- event/event.sh@49 -- # uname -s 00:06:45.959 11:14:41 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:45.959 11:14:41 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:45.959 11:14:41 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.959 11:14:41 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.959 11:14:41 event -- common/autotest_common.sh@10 -- # set +x 00:06:45.959 ************************************ 00:06:45.959 START TEST event_scheduler 00:06:45.959 ************************************ 00:06:45.959 11:14:41 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:45.959 * Looking for test storage... 00:06:45.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:45.959 11:14:41 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:45.959 11:14:41 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1997511 00:06:45.959 11:14:41 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:45.959 11:14:41 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:45.959 11:14:41 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1997511 00:06:45.959 11:14:41 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 1997511 ']' 00:06:45.959 11:14:41 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.959 11:14:41 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.959 11:14:41 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
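Because the scheduler app above is started with --wait-for-rpc, initialization pauses until the test picks a scheduler over RPC. The two rpc_cmd calls it issues below correspond to, in plain rpc.py terms:

scripts/rpc.py framework_set_scheduler dynamic   # select the dynamic scheduler while init is held
scripts/rpc.py framework_start_init              # resume and finish application startup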
00:06:45.959 11:14:41 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.959 11:14:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:45.959 [2024-07-26 11:14:41.446175] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:06:45.959 [2024-07-26 11:14:41.446277] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1997511 ] 00:06:45.959 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.959 [2024-07-26 11:14:41.542641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:46.219 [2024-07-26 11:14:41.740683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.219 [2024-07-26 11:14:41.740736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.219 [2024-07-26 11:14:41.740788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:46.219 [2024-07-26 11:14:41.740791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.219 11:14:41 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:46.219 11:14:41 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:46.219 11:14:41 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:46.219 11:14:41 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.219 11:14:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:46.219 [2024-07-26 11:14:41.837762] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:46.219 [2024-07-26 11:14:41.837792] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:46.219 [2024-07-26 11:14:41.837812] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:46.219 [2024-07-26 11:14:41.837825] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:46.219 [2024-07-26 11:14:41.837836] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:46.219 11:14:41 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.219 11:14:41 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:46.219 11:14:41 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.219 11:14:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:46.478 [2024-07-26 11:14:41.998543] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
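The scheduler_create_thread test that follows drives test-only RPCs loaded through rpc.py's --plugin mechanism, so the scheduler_plugin module must be importable (the harness arranges PYTHONPATH for this). Representative calls, arguments as they appear in the log:

scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100   # pinned thread, 100% active
scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50                        # set thread 11 to 50% active
scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12                               # remove thread 12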
00:06:46.478 11:14:41 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.478 11:14:41 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:46.479 11:14:41 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:46.479 11:14:41 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.479 11:14:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:46.479 ************************************ 00:06:46.479 START TEST scheduler_create_thread 00:06:46.479 ************************************ 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.479 2 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.479 3 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.479 4 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.479 5 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.479 6 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.479 7 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.479 8 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.479 9 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.479 10 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.479 11:14:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.738 11:14:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.738 11:14:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:46.738 11:14:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.738 11:14:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.114 11:14:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.114 11:14:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:48.114 11:14:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:48.114 11:14:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.114 11:14:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.046 11:14:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.046 00:06:49.046 real 0m2.620s 00:06:49.046 user 0m0.016s 00:06:49.046 sys 0m0.005s 00:06:49.047 11:14:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.047 11:14:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.047 ************************************ 00:06:49.047 END TEST scheduler_create_thread 00:06:49.047 ************************************ 00:06:49.047 11:14:44 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:49.047 11:14:44 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1997511 00:06:49.047 11:14:44 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 1997511 ']' 00:06:49.047 11:14:44 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 1997511 00:06:49.047 11:14:44 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:49.047 11:14:44 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:49.047 11:14:44 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1997511 00:06:49.304 11:14:44 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:49.304 11:14:44 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:49.304 11:14:44 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1997511' 00:06:49.304 killing process with pid 1997511 00:06:49.304 11:14:44 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 1997511 00:06:49.304 11:14:44 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 1997511 00:06:49.562 [2024-07-26 11:14:45.138511] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:06:50.129 00:06:50.129 real 0m4.180s 00:06:50.129 user 0m6.283s 00:06:50.129 sys 0m0.497s 00:06:50.129 11:14:45 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:50.129 11:14:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:50.129 ************************************ 00:06:50.129 END TEST event_scheduler 00:06:50.129 ************************************ 00:06:50.129 11:14:45 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:50.129 11:14:45 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:50.129 11:14:45 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:50.129 11:14:45 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:50.129 11:14:45 event -- common/autotest_common.sh@10 -- # set +x 00:06:50.129 ************************************ 00:06:50.129 START TEST app_repeat 00:06:50.129 ************************************ 00:06:50.129 11:14:45 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:50.129 11:14:45 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.129 11:14:45 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.129 11:14:45 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:50.129 11:14:45 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:50.129 11:14:45 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:50.129 11:14:45 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:50.129 11:14:45 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:50.129 11:14:45 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1998065 00:06:50.129 11:14:45 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:50.129 11:14:45 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:50.129 11:14:45 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1998065' 00:06:50.129 Process app_repeat pid: 1998065 00:06:50.129 11:14:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:50.129 11:14:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:50.129 spdk_app_start Round 0 00:06:50.129 11:14:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1998065 /var/tmp/spdk-nbd.sock 00:06:50.129 11:14:45 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1998065 ']' 00:06:50.129 11:14:45 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:50.129 11:14:45 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:50.129 11:14:45 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:50.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:50.129 11:14:45 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:50.129 11:14:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:50.129 [2024-07-26 11:14:45.609000] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
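The app_repeat test below cycles malloc bdevs through nbd attach and detach against its own RPC socket. The core steps of one round reduce to:

scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096     # 64 MiB bdev, 4 KiB blocks -> Malloc0
scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0        # detach before the next round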
00:06:50.129 [2024-07-26 11:14:45.609132] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1998065 ] 00:06:50.129 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.129 [2024-07-26 11:14:45.709175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:50.388 [2024-07-26 11:14:45.837773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.388 [2024-07-26 11:14:45.837777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.645 11:14:46 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.645 11:14:46 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:50.645 11:14:46 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.211 Malloc0 00:06:51.211 11:14:46 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.468 Malloc1 00:06:51.468 11:14:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.468 11:14:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.468 11:14:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.468 11:14:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:51.468 11:14:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.468 11:14:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:51.468 11:14:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.468 11:14:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.468 11:14:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.468 11:14:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:51.468 11:14:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.468 11:14:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:51.468 11:14:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:51.468 11:14:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:51.468 11:14:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.468 11:14:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:52.033 /dev/nbd0 00:06:52.033 11:14:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:52.033 11:14:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:52.033 11:14:47 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:52.033 11:14:47 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:52.033 11:14:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:52.033 11:14:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:52.033 11:14:47 event.app_repeat 
-- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:52.033 11:14:47 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:52.033 11:14:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:52.033 11:14:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:52.033 11:14:47 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:52.033 1+0 records in 00:06:52.033 1+0 records out 00:06:52.034 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000164191 s, 24.9 MB/s 00:06:52.034 11:14:47 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:52.034 11:14:47 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:52.034 11:14:47 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:52.034 11:14:47 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:52.034 11:14:47 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:52.034 11:14:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.034 11:14:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.034 11:14:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:52.291 /dev/nbd1 00:06:52.291 11:14:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:52.291 11:14:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:52.291 11:14:47 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:52.291 11:14:47 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:52.291 11:14:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:52.291 11:14:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:52.291 11:14:47 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:52.291 11:14:47 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:52.291 11:14:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:52.291 11:14:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:52.291 11:14:47 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:52.291 1+0 records in 00:06:52.291 1+0 records out 00:06:52.291 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212592 s, 19.3 MB/s 00:06:52.291 11:14:47 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:52.291 11:14:47 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:52.291 11:14:47 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:52.291 11:14:47 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:52.291 11:14:47 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:52.291 11:14:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.291 11:14:47 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.291 11:14:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:52.291 11:14:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.291 11:14:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.854 11:14:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:52.854 { 00:06:52.854 "nbd_device": "/dev/nbd0", 00:06:52.854 "bdev_name": "Malloc0" 00:06:52.854 }, 00:06:52.854 { 00:06:52.854 "nbd_device": "/dev/nbd1", 00:06:52.854 "bdev_name": "Malloc1" 00:06:52.854 } 00:06:52.854 ]' 00:06:52.854 11:14:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:52.854 { 00:06:52.854 "nbd_device": "/dev/nbd0", 00:06:52.854 "bdev_name": "Malloc0" 00:06:52.854 }, 00:06:52.854 { 00:06:52.854 "nbd_device": "/dev/nbd1", 00:06:52.854 "bdev_name": "Malloc1" 00:06:52.854 } 00:06:52.854 ]' 00:06:52.854 11:14:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.854 11:14:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:52.854 /dev/nbd1' 00:06:52.854 11:14:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:52.854 /dev/nbd1' 00:06:52.854 11:14:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.854 11:14:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:52.854 11:14:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:52.854 11:14:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:52.854 11:14:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:52.854 11:14:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:52.854 11:14:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.854 11:14:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.854 11:14:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:52.854 11:14:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:52.854 11:14:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:52.854 11:14:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:52.854 256+0 records in 00:06:52.854 256+0 records out 00:06:52.854 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00871834 s, 120 MB/s 00:06:52.854 11:14:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.854 11:14:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:52.854 256+0 records in 00:06:52.854 256+0 records out 00:06:52.854 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0240381 s, 43.6 MB/s 00:06:52.854 11:14:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.855 11:14:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:52.855 256+0 records in 00:06:52.855 256+0 records out 00:06:52.855 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0269707 s, 38.9 MB/s 00:06:52.855 11:14:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:52.855 11:14:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.855 11:14:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.855 11:14:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:52.855 11:14:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:52.855 11:14:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:52.855 11:14:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:52.855 11:14:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.855 11:14:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:52.855 11:14:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.855 11:14:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:52.855 11:14:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:52.855 11:14:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:52.855 11:14:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.855 11:14:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.855 11:14:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:52.855 11:14:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:52.855 11:14:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.855 11:14:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:53.419 11:14:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:53.419 11:14:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:53.419 11:14:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:53.419 11:14:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.419 11:14:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.419 11:14:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:53.419 11:14:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:53.419 11:14:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.419 11:14:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.419 11:14:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:53.983 11:14:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:53.983 11:14:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:53.983 11:14:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:53.983 11:14:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.983 11:14:49 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.983 11:14:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:53.983 11:14:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:53.983 11:14:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.983 11:14:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:53.983 11:14:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.983 11:14:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:54.240 11:14:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:54.240 11:14:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:54.240 11:14:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:54.240 11:14:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:54.240 11:14:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:54.240 11:14:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:54.240 11:14:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:54.240 11:14:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:54.240 11:14:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:54.240 11:14:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:54.240 11:14:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:54.240 11:14:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:54.240 11:14:49 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:54.498 11:14:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:55.063 [2024-07-26 11:14:50.431044] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:55.063 [2024-07-26 11:14:50.552069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.063 [2024-07-26 11:14:50.552069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.063 [2024-07-26 11:14:50.613253] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:55.063 [2024-07-26 11:14:50.613335] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:57.651 11:14:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:57.651 11:14:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:57.651 spdk_app_start Round 1 00:06:57.651 11:14:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1998065 /var/tmp/spdk-nbd.sock 00:06:57.651 11:14:53 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1998065 ']' 00:06:57.651 11:14:53 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:57.651 11:14:53 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:57.651 11:14:53 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:57.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
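
Each round's I/O check (nbd_dd_data_verify in the trace) is simple at its core: generate 1 MiB of random data, write it through both nbd devices with O_DIRECT, then byte-compare each device against the source file. A sketch with the sizes and flags taken from the Round 0 trace above, using an illustrative temp path:

    tmp_file=/tmp/nbdrandtest                                   # illustrative path
    dd if=/dev/urandom of=$tmp_file bs=4096 count=256           # 256 x 4 KiB = 1 MiB
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=$tmp_file of=$nbd bs=4096 count=256 oflag=direct  # write, bypassing page cache
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M $tmp_file $nbd                             # fail loudly on any mismatch
    done
    rm $tmp_file
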
00:06:57.651 11:14:53 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:57.651 11:14:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:57.909 11:14:53 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:57.909 11:14:53 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:57.909 11:14:53 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:58.474 Malloc0 00:06:58.474 11:14:54 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:59.039 Malloc1 00:06:59.039 11:14:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:59.039 11:14:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.039 11:14:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:59.039 11:14:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:59.039 11:14:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.039 11:14:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:59.039 11:14:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:59.039 11:14:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.039 11:14:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:59.039 11:14:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:59.039 11:14:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.039 11:14:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:59.039 11:14:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:59.039 11:14:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:59.039 11:14:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:59.039 11:14:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:59.605 /dev/nbd0 00:06:59.605 11:14:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:59.605 11:14:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:59.605 11:14:55 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:59.605 11:14:55 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:59.605 11:14:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:59.605 11:14:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:59.605 11:14:55 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:59.605 11:14:55 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:59.605 11:14:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:59.605 11:14:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:59.605 11:14:55 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:59.605 1+0 records in 00:06:59.605 1+0 records out 00:06:59.605 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000158066 s, 25.9 MB/s 00:06:59.605 11:14:55 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:59.605 11:14:55 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:59.882 11:14:55 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:59.882 11:14:55 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:59.882 11:14:55 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:59.882 11:14:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.882 11:14:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:59.882 11:14:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:00.155 /dev/nbd1 00:07:00.155 11:14:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:00.155 11:14:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:00.155 11:14:55 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:00.155 11:14:55 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:00.155 11:14:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:00.155 11:14:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:00.155 11:14:55 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:00.155 11:14:55 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:00.155 11:14:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:00.155 11:14:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:00.155 11:14:55 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:00.155 1+0 records in 00:07:00.155 1+0 records out 00:07:00.155 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219561 s, 18.7 MB/s 00:07:00.155 11:14:55 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:00.155 11:14:55 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:00.155 11:14:55 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:00.155 11:14:55 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:00.155 11:14:55 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:00.155 11:14:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:00.155 11:14:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:00.155 11:14:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:00.155 11:14:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.155 11:14:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:00.413 11:14:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:07:00.413 { 00:07:00.413 "nbd_device": "/dev/nbd0", 00:07:00.413 "bdev_name": "Malloc0" 00:07:00.413 }, 00:07:00.413 { 00:07:00.413 "nbd_device": "/dev/nbd1", 00:07:00.413 "bdev_name": "Malloc1" 00:07:00.413 } 00:07:00.413 ]' 00:07:00.413 11:14:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:00.413 { 00:07:00.413 "nbd_device": "/dev/nbd0", 00:07:00.413 "bdev_name": "Malloc0" 00:07:00.413 }, 00:07:00.413 { 00:07:00.413 "nbd_device": "/dev/nbd1", 00:07:00.413 "bdev_name": "Malloc1" 00:07:00.413 } 00:07:00.413 ]' 00:07:00.413 11:14:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:00.413 11:14:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:00.413 /dev/nbd1' 00:07:00.413 11:14:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:00.413 /dev/nbd1' 00:07:00.413 11:14:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:00.413 11:14:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:00.413 11:14:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:00.413 11:14:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:00.413 11:14:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:00.413 11:14:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:00.413 11:14:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.413 11:14:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:00.413 11:14:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:00.413 11:14:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:00.413 11:14:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:00.413 11:14:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:00.413 256+0 records in 00:07:00.413 256+0 records out 00:07:00.413 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00815829 s, 129 MB/s 00:07:00.413 11:14:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:00.413 11:14:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:00.413 256+0 records in 00:07:00.413 256+0 records out 00:07:00.413 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0273233 s, 38.4 MB/s 00:07:00.413 11:14:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:00.413 11:14:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:00.671 256+0 records in 00:07:00.671 256+0 records out 00:07:00.671 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0259527 s, 40.4 MB/s 00:07:00.671 11:14:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:00.671 11:14:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.671 11:14:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:00.671 11:14:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:00.671 11:14:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:00.671 11:14:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:00.671 11:14:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:00.671 11:14:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:00.671 11:14:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:00.671 11:14:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:00.671 11:14:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:00.671 11:14:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:00.671 11:14:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:00.671 11:14:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.671 11:14:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.671 11:14:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:00.671 11:14:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:00.671 11:14:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.671 11:14:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:00.929 11:14:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:00.929 11:14:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:00.929 11:14:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:00.929 11:14:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.929 11:14:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.929 11:14:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:00.929 11:14:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:00.929 11:14:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.929 11:14:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.929 11:14:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:01.187 11:14:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:01.187 11:14:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:01.187 11:14:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:01.187 11:14:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.187 11:14:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.187 11:14:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:01.187 11:14:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:01.187 11:14:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.187 11:14:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:01.187 11:14:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.187 11:14:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:01.752 11:14:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:01.753 11:14:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:01.753 11:14:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:02.010 11:14:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:02.010 11:14:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:02.010 11:14:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:02.010 11:14:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:02.010 11:14:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:02.010 11:14:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:02.010 11:14:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:02.010 11:14:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:02.010 11:14:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:02.010 11:14:57 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:02.268 11:14:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:02.527 [2024-07-26 11:14:58.071621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:02.785 [2024-07-26 11:14:58.192109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.785 [2024-07-26 11:14:58.192114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.785 [2024-07-26 11:14:58.255912] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:02.785 [2024-07-26 11:14:58.255995] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:05.310 11:15:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:05.310 11:15:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:05.310 spdk_app_start Round 2 00:07:05.310 11:15:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1998065 /var/tmp/spdk-nbd.sock 00:07:05.310 11:15:00 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1998065 ']' 00:07:05.310 11:15:00 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:05.310 11:15:00 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:05.310 11:15:00 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:05.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
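
The disk accounting in the trace (nbd_get_count) is a small JSON pipeline: ask the target for its nbd table, pull the device paths out with jq, and count them with grep -c. A sketch of that pipeline; the final assertion line is illustrative:

    disks_json=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')     # e.g. /dev/nbd0, /dev/nbd1
    count=$(echo "$names" | grep -c /dev/nbd || true)           # grep -c exits 1 on zero hits
    [ "$count" -eq 2 ] || echo "expected 2 nbd devices, found $count"
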
00:07:05.310 11:15:00 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:05.310 11:15:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:05.569 11:15:01 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.569 11:15:01 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:05.569 11:15:01 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:06.134 Malloc0 00:07:06.134 11:15:01 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:06.703 Malloc1 00:07:06.703 11:15:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:06.703 11:15:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.704 11:15:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:06.704 11:15:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:06.704 11:15:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:06.704 11:15:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:06.704 11:15:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:06.704 11:15:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.704 11:15:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:06.704 11:15:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:06.704 11:15:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:06.704 11:15:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:06.704 11:15:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:06.704 11:15:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:06.704 11:15:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:06.704 11:15:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:07.270 /dev/nbd0 00:07:07.270 11:15:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:07.270 11:15:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:07.270 11:15:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:07.270 11:15:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:07.270 11:15:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:07.270 11:15:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:07.270 11:15:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:07.270 11:15:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:07.270 11:15:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:07.270 11:15:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:07.270 11:15:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:07.270 1+0 records in 00:07:07.270 1+0 records out 00:07:07.270 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199526 s, 20.5 MB/s 00:07:07.270 11:15:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:07.270 11:15:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:07.270 11:15:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:07.270 11:15:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:07.270 11:15:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:07.270 11:15:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.270 11:15:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:07.270 11:15:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:07.836 /dev/nbd1 00:07:07.836 11:15:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:07.836 11:15:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:07.836 11:15:03 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:07.836 11:15:03 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:07.836 11:15:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:07.836 11:15:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:07.836 11:15:03 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:07.836 11:15:03 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:07.836 11:15:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:07.836 11:15:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:07.836 11:15:03 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:07.836 1+0 records in 00:07:07.836 1+0 records out 00:07:07.836 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232918 s, 17.6 MB/s 00:07:07.836 11:15:03 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:07.836 11:15:03 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:07.836 11:15:03 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:07.836 11:15:03 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:07.836 11:15:03 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:07.836 11:15:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.836 11:15:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:07.836 11:15:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:07.836 11:15:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.836 11:15:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:08.095 11:15:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:07:08.095 { 00:07:08.095 "nbd_device": "/dev/nbd0", 00:07:08.095 "bdev_name": "Malloc0" 00:07:08.095 }, 00:07:08.095 { 00:07:08.095 "nbd_device": "/dev/nbd1", 00:07:08.095 "bdev_name": "Malloc1" 00:07:08.095 } 00:07:08.095 ]' 00:07:08.095 11:15:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:08.095 { 00:07:08.095 "nbd_device": "/dev/nbd0", 00:07:08.095 "bdev_name": "Malloc0" 00:07:08.095 }, 00:07:08.095 { 00:07:08.095 "nbd_device": "/dev/nbd1", 00:07:08.095 "bdev_name": "Malloc1" 00:07:08.095 } 00:07:08.095 ]' 00:07:08.095 11:15:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:08.095 11:15:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:08.095 /dev/nbd1' 00:07:08.095 11:15:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:08.095 /dev/nbd1' 00:07:08.095 11:15:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:08.095 11:15:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:08.095 11:15:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:08.095 11:15:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:08.095 11:15:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:08.095 11:15:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:08.095 11:15:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:08.095 11:15:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:08.095 11:15:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:08.095 11:15:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:08.095 11:15:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:08.095 11:15:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:08.095 256+0 records in 00:07:08.095 256+0 records out 00:07:08.095 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00521354 s, 201 MB/s 00:07:08.095 11:15:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:08.095 11:15:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:08.095 256+0 records in 00:07:08.095 256+0 records out 00:07:08.095 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024348 s, 43.1 MB/s 00:07:08.095 11:15:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:08.095 11:15:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:08.353 256+0 records in 00:07:08.353 256+0 records out 00:07:08.353 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254187 s, 41.3 MB/s 00:07:08.353 11:15:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:08.353 11:15:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:08.353 11:15:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:08.353 11:15:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:08.353 11:15:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:08.353 11:15:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:08.353 11:15:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:08.353 11:15:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:08.353 11:15:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:08.353 11:15:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:08.353 11:15:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:08.353 11:15:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:08.353 11:15:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:08.353 11:15:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.353 11:15:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:08.353 11:15:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:08.353 11:15:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:08.353 11:15:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.353 11:15:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:08.610 11:15:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:08.610 11:15:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:08.610 11:15:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:08.610 11:15:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.610 11:15:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.610 11:15:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:08.610 11:15:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:08.610 11:15:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.610 11:15:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.610 11:15:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:08.868 11:15:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:08.868 11:15:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:08.868 11:15:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:08.868 11:15:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.868 11:15:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.868 11:15:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:08.868 11:15:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:08.868 11:15:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.868 11:15:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:08.868 11:15:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.868 11:15:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:09.433 11:15:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:09.433 11:15:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:09.433 11:15:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:09.433 11:15:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:09.433 11:15:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:09.433 11:15:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:09.433 11:15:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:09.433 11:15:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:09.433 11:15:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:09.433 11:15:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:09.433 11:15:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:09.433 11:15:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:09.433 11:15:04 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:09.999 11:15:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:10.257 [2024-07-26 11:15:05.691595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:10.257 [2024-07-26 11:15:05.812426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.257 [2024-07-26 11:15:05.812439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.257 [2024-07-26 11:15:05.875406] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:10.257 [2024-07-26 11:15:05.875476] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:12.786 11:15:08 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1998065 /var/tmp/spdk-nbd.sock 00:07:12.786 11:15:08 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1998065 ']' 00:07:12.786 11:15:08 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:12.786 11:15:08 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:12.786 11:15:08 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:12.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
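
Both the attach path (waitfornbd) and the detach path (waitfornbd_exit) seen throughout the trace poll /proc/partitions up to 20 times, breaking once the kernel has (or no longer has) the named disk registered; waitfornbd additionally issues a direct-I/O read to confirm the device is usable. A reduced sketch of the two polling loops (the sleep interval is an assumption; the trace does not show it):

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && return 0   # disk registered
            sleep 0.1
        done
        return 1
    }

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || return 0   # disk gone
            sleep 0.1
        done
        return 1
    }
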
00:07:12.786 11:15:08 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:12.786 11:15:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:13.352 11:15:08 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:13.352 11:15:08 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:13.352 11:15:08 event.app_repeat -- event/event.sh@39 -- # killprocess 1998065 00:07:13.352 11:15:08 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 1998065 ']' 00:07:13.352 11:15:08 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 1998065 00:07:13.352 11:15:08 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:13.352 11:15:08 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:13.352 11:15:08 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1998065 00:07:13.352 11:15:08 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:13.352 11:15:08 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:13.352 11:15:08 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1998065' 00:07:13.352 killing process with pid 1998065 00:07:13.352 11:15:08 event.app_repeat -- common/autotest_common.sh@969 -- # kill 1998065 00:07:13.352 11:15:08 event.app_repeat -- common/autotest_common.sh@974 -- # wait 1998065 00:07:13.611 spdk_app_start is called in Round 0. 00:07:13.611 Shutdown signal received, stop current app iteration 00:07:13.611 Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 reinitialization... 00:07:13.611 spdk_app_start is called in Round 1. 00:07:13.611 Shutdown signal received, stop current app iteration 00:07:13.611 Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 reinitialization... 00:07:13.611 spdk_app_start is called in Round 2. 00:07:13.611 Shutdown signal received, stop current app iteration 00:07:13.611 Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 reinitialization... 00:07:13.611 spdk_app_start is called in Round 3. 
00:07:13.611 Shutdown signal received, stop current app iteration 00:07:13.611 11:15:09 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:13.611 11:15:09 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:13.611 00:07:13.611 real 0m23.512s 00:07:13.611 user 0m53.995s 00:07:13.611 sys 0m4.600s 00:07:13.611 11:15:09 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.611 11:15:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:13.611 ************************************ 00:07:13.611 END TEST app_repeat 00:07:13.611 ************************************ 00:07:13.611 11:15:09 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:13.611 11:15:09 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:13.611 11:15:09 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:13.611 11:15:09 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.611 11:15:09 event -- common/autotest_common.sh@10 -- # set +x 00:07:13.611 ************************************ 00:07:13.611 START TEST cpu_locks 00:07:13.611 ************************************ 00:07:13.611 11:15:09 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:13.611 * Looking for test storage... 00:07:13.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:13.611 11:15:09 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:13.611 11:15:09 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:13.611 11:15:09 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:13.611 11:15:09 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:13.611 11:15:09 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:13.611 11:15:09 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.611 11:15:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.611 ************************************ 00:07:13.611 START TEST default_locks 00:07:13.611 ************************************ 00:07:13.611 11:15:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:13.611 11:15:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2001580 00:07:13.611 11:15:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:13.611 11:15:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2001580 00:07:13.611 11:15:09 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2001580 ']' 00:07:13.611 11:15:09 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.611 11:15:09 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:13.611 11:15:09 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
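
default_locks, which begins here, starts a one-core target (-m 0x1) and then asserts that the process holds its CPU-core file lock; the locks_exist helper in the trace below does this by listing the locks held by the pid and grepping for the lock-file name. A sketch of that check (the spdk_cpu_lock naming follows the trace; treat the exact lock-file location, commonly /var/tmp/spdk_cpu_lock_*, as an assumption):

    locks_exist() {
        local pid=$1
        # lslocks prints every lock the pid holds; one flock per claimed core
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }
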
00:07:13.611 11:15:09 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:13.611 11:15:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:07:13.870 [2024-07-26 11:15:09.273372] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization...
00:07:13.870 [2024-07-26 11:15:09.273489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2001580 ]
00:07:13.870 EAL: No free 2048 kB hugepages reported on node 1
00:07:13.870 [2024-07-26 11:15:09.341336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:13.870 [2024-07-26 11:15:09.463711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:14.128 11:15:09 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:14.128 11:15:09 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0
00:07:14.128 11:15:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2001580
00:07:14.128 11:15:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2001580
00:07:14.128 11:15:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:14.386 lslocks: write error
00:07:14.386 11:15:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2001580
00:07:14.386 11:15:09 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 2001580 ']'
00:07:14.386 11:15:09 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 2001580
00:07:14.386 11:15:09 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname
00:07:14.386 11:15:09 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:14.386 11:15:09 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2001580
00:07:14.386 11:15:09 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:14.386 11:15:09 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:14.386 11:15:09 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2001580'
00:07:14.386 killing process with pid 2001580
00:07:14.386 11:15:09 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 2001580
00:07:14.386 11:15:09 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 2001580
00:07:14.977 11:15:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2001580
00:07:14.977 11:15:10 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0
00:07:14.977 11:15:10 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2001580
00:07:14.977 11:15:10 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:07:14.977 11:15:10 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:14.977 11:15:10 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:07:14.977 11:15:10 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:14.977 11:15:10 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 2001580
00:07:14.977 11:15:10 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2001580 ']'
00:07:14.977 11:15:10 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:14.977 11:15:10 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:14.977 11:15:10 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:14.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:14.977 11:15:10 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:14.977 11:15:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:07:14.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2001580) - No such process
00:07:14.977 ERROR: process (pid: 2001580) is no longer running
00:07:14.977 11:15:10 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:14.977 11:15:10 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1
00:07:14.977 11:15:10 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1
00:07:14.977 11:15:10 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:14.977 11:15:10 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:14.977 11:15:10 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:14.977 11:15:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:07:14.977 11:15:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:07:14.977 11:15:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:07:14.977 11:15:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:07:14.977
00:07:14.977 real 0m1.242s
00:07:14.977 user 0m1.212s
00:07:14.977 sys 0m0.560s
00:07:14.977 11:15:10 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:14.977 11:15:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:07:14.977 ************************************
00:07:14.977 END TEST default_locks
00:07:14.977 ************************************
00:07:14.977 11:15:10 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:07:14.977 11:15:10 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:14.977 11:15:10 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:14.977 11:15:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:14.977 ************************************
00:07:14.977 START TEST default_locks_via_rpc
00:07:14.977 ************************************
00:07:14.977 11:15:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc
00:07:14.977 11:15:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2001849
00:07:14.977 11:15:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:07:14.977 11:15:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2001849
00:07:14.977 11:15:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2001849 ']'
00:07:14.977 11:15:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:14.977 11:15:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:14.977 11:15:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:14.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:14.977 11:15:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:14.977 11:15:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:15.242 [2024-07-26 11:15:10.623226] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization...
00:07:15.242 [2024-07-26 11:15:10.623333] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2001849 ]
00:07:15.242 EAL: No free 2048 kB hugepages reported on node 1
00:07:15.242 [2024-07-26 11:15:10.696661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:15.242 [2024-07-26 11:15:10.822215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:15.500 11:15:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:15.500 11:15:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:07:15.500 11:15:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:07:15.500 11:15:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:15.500 11:15:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:15.500 11:15:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:15.500 11:15:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:07:15.500 11:15:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:07:15.500 11:15:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:07:15.500 11:15:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:07:15.500 11:15:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:07:15.500 11:15:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:15.500 11:15:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:15.500 11:15:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:15.500 11:15:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2001849
00:07:15.500 11:15:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2001849
00:07:15.500 11:15:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:15.757 11:15:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2001849
00:07:15.757 11:15:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 2001849 ']'
00:07:15.757 11:15:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 2001849
00:07:15.757 11:15:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname
00:07:15.757 11:15:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:15.757 11:15:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2001849
00:07:16.015 11:15:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:16.015 11:15:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:16.015 11:15:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2001849'
00:07:16.015 killing process with pid 2001849
00:07:16.015 11:15:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 2001849
00:07:16.015 11:15:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 2001849
00:07:16.273
00:07:16.273 real 0m1.394s
00:07:16.273 user 0m1.356s
00:07:16.273 sys 0m0.593s
00:07:16.273 11:15:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:16.273 11:15:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:16.273 ************************************
00:07:16.273 END TEST default_locks_via_rpc
00:07:16.273 ************************************
00:07:16.532 11:15:11 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:07:16.532 11:15:11 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:16.532 11:15:11 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:16.532 11:15:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:16.532 ************************************
00:07:16.532 START TEST non_locking_app_on_locked_coremask
00:07:16.532 ************************************
00:07:16.532 11:15:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask
00:07:16.532 11:15:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2002021
00:07:16.532 11:15:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:07:16.532 11:15:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2002021 /var/tmp/spdk.sock
00:07:16.532 11:15:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2002021 ']'
00:07:16.532 11:15:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:16.532 11:15:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:16.532 11:15:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
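(The locks_exist helper traced at cpu_locks.sh@22 is just lslocks piped into grep; the 'lslocks: write error' lines elsewhere in this log appear to be harmless SIGPIPE noise from grep -q exiting on its first match. The same probe, runnable by hand, with an example PID taken from this run:)

  pid=2001849                                   # example PID from this suite
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core locks held"
  ls /var/tmp/spdk_cpu_lock_* 2>/dev/null       # the lock files themselves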
00:07:16.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:16.532 11:15:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:16.532 11:15:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:16.532 [2024-07-26 11:15:12.050298] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization...
00:07:16.532 [2024-07-26 11:15:12.050402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2002021 ]
00:07:16.532 EAL: No free 2048 kB hugepages reported on node 1
00:07:16.532 [2024-07-26 11:15:12.126303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:16.790 [2024-07-26 11:15:12.252119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:17.048 11:15:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:17.048 11:15:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:07:17.048 11:15:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2002095
00:07:17.048 11:15:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2002095 /var/tmp/spdk2.sock
00:07:17.048 11:15:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:07:17.048 11:15:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2002095 ']'
00:07:17.048 11:15:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:17.048 11:15:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:17.048 11:15:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:17.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:17.048 11:15:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:17.048 11:15:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:17.048 [2024-07-26 11:15:12.570360] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization...
00:07:17.048 [2024-07-26 11:15:12.570463] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2002095 ]
00:07:17.048 EAL: No free 2048 kB hugepages reported on node 1
00:07:17.048 [2024-07-26 11:15:12.665511] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
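(non_locking_app_on_locked_coremask needs its second target to share core 0 without fighting over the lock; the log shows the command-line route here and the JSON-RPC route in default_locks_via_rpc above. Both condensed below, with binaries, flags, and sockets exactly as they appear in this log; the scripts/rpc.py path assumes a standard SPDK checkout:)

  # opt out of core-lock claiming at startup
  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  # or toggle claiming on a live target over JSON-RPC
  scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
  scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks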
00:07:17.048 [2024-07-26 11:15:12.665543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:17.306 [2024-07-26 11:15:12.914760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:18.238 11:15:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:18.238 11:15:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:07:18.238 11:15:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2002021
00:07:18.238 11:15:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2002021
00:07:18.238 11:15:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:18.803 lslocks: write error
00:07:18.803 11:15:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2002021
00:07:18.803 11:15:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2002021 ']'
00:07:18.803 11:15:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2002021
00:07:18.803 11:15:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:07:18.803 11:15:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:18.803 11:15:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2002021
00:07:18.803 11:15:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:18.803 11:15:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:18.804 11:15:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2002021'
00:07:18.804 killing process with pid 2002021
00:07:18.804 11:15:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2002021
00:07:18.804 11:15:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2002021
00:07:19.762 11:15:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2002095
00:07:19.762 11:15:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2002095 ']'
00:07:19.762 11:15:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2002095
00:07:19.762 11:15:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:07:19.762 11:15:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:19.762 11:15:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2002095
00:07:19.762 11:15:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:19.762 11:15:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:19.762 11:15:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2002095'
00:07:19.762 killing process with pid 2002095
00:07:19.762 11:15:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2002095
00:07:19.762 11:15:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2002095
00:07:20.327
00:07:20.327 real 0m3.788s
00:07:20.327 user 0m4.073s
00:07:20.327 sys 0m1.204s
00:07:20.327 11:15:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:20.327 11:15:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:20.328 ************************************
00:07:20.328 END TEST non_locking_app_on_locked_coremask
00:07:20.328 ************************************
00:07:20.328 11:15:15 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:07:20.328 11:15:15 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:20.328 11:15:15 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:20.328 11:15:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:20.328 ************************************
00:07:20.328 START TEST locking_app_on_unlocked_coremask
00:07:20.328 ************************************
00:07:20.328 11:15:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask
00:07:20.328 11:15:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2002458
00:07:20.328 11:15:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:07:20.328 11:15:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2002458 /var/tmp/spdk.sock
00:07:20.328 11:15:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2002458 ']'
00:07:20.328 11:15:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:20.328 11:15:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:20.328 11:15:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:20.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:20.328 11:15:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:20.328 11:15:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:20.328 [2024-07-26 11:15:15.895778] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization...
00:07:20.328 [2024-07-26 11:15:15.895882] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2002458 ]
00:07:20.328 EAL: No free 2048 kB hugepages reported on node 1
00:07:20.328 [2024-07-26 11:15:15.969541] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:20.328 [2024-07-26 11:15:15.969587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:20.586 [2024-07-26 11:15:16.093228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:20.844 11:15:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:20.844 11:15:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0
00:07:20.844 11:15:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2002587
00:07:20.844 11:15:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:07:20.844 11:15:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2002587 /var/tmp/spdk2.sock
00:07:20.844 11:15:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2002587 ']'
00:07:20.844 11:15:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:20.844 11:15:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:20.844 11:15:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:20.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:20.844 11:15:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:20.844 11:15:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:20.844 [2024-07-26 11:15:16.429989] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization...
00:07:20.844 [2024-07-26 11:15:16.430106] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2002587 ]
00:07:21.102 EAL: No free 2048 kB hugepages reported on node 1
00:07:21.102 [2024-07-26 11:15:16.538038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:21.359 [2024-07-26 11:15:16.787247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:21.925 11:15:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:21.925 11:15:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0
00:07:21.925 11:15:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2002587
00:07:21.925 11:15:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2002587
00:07:21.925 11:15:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:23.299 lslocks: write error
00:07:23.299 11:15:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2002458
00:07:23.299 11:15:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2002458 ']'
00:07:23.299 11:15:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2002458
00:07:23.299 11:15:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname
00:07:23.299 11:15:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:23.299 11:15:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2002458
00:07:23.299 11:15:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:23.299 11:15:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:23.299 11:15:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2002458'
00:07:23.299 killing process with pid 2002458
00:07:23.299 11:15:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2002458
00:07:23.299 11:15:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2002458
00:07:24.232 11:15:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2002587
00:07:24.232 11:15:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2002587 ']'
00:07:24.232 11:15:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2002587
00:07:24.232 11:15:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname
00:07:24.232 11:15:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:24.232 11:15:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2002587
00:07:24.232 11:15:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:24.232 11:15:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:24.232 11:15:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2002587'
00:07:24.232 killing process with pid 2002587
00:07:24.232 11:15:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2002587
00:07:24.232 11:15:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2002587
00:07:24.797
00:07:24.797 real 0m4.338s
00:07:24.797 user 0m4.637s
00:07:24.797 sys 0m1.534s
00:07:24.797 11:15:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:24.797 11:15:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:24.797 ************************************
00:07:24.797 END TEST locking_app_on_unlocked_coremask
00:07:24.797 ************************************
00:07:24.797 11:15:20 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:07:24.797 11:15:20 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:24.797 11:15:20 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:24.797 11:15:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:24.797 ************************************
00:07:24.797 START TEST locking_app_on_locked_coremask
00:07:24.797 ************************************
00:07:24.797 11:15:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask
00:07:24.797 11:15:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2003024
00:07:24.797 11:15:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:07:24.797 11:15:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2003024 /var/tmp/spdk.sock
00:07:24.797 11:15:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2003024 ']'
00:07:24.797 11:15:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:24.797 11:15:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:24.797 11:15:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:24.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:24.797 11:15:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:24.797 11:15:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:24.797 [2024-07-26 11:15:20.304258] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization...
00:07:24.797 [2024-07-26 11:15:20.304375] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2003024 ]
00:07:24.798 EAL: No free 2048 kB hugepages reported on node 1
00:07:24.798 [2024-07-26 11:15:20.378403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:25.055 [2024-07-26 11:15:20.500961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:25.314 11:15:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:25.314 11:15:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:07:25.314 11:15:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2003147
00:07:25.314 11:15:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:07:25.314 11:15:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2003147 /var/tmp/spdk2.sock
00:07:25.314 11:15:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0
00:07:25.314 11:15:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2003147 /var/tmp/spdk2.sock
00:07:25.314 11:15:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:07:25.314 11:15:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:25.314 11:15:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:07:25.314 11:15:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:25.314 11:15:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2003147 /var/tmp/spdk2.sock
00:07:25.314 11:15:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2003147 ']'
00:07:25.314 11:15:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:25.314 11:15:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:25.314 11:15:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:25.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:25.314 11:15:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:25.314 11:15:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:25.314 [2024-07-26 11:15:20.838380] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization...
00:07:25.314 [2024-07-26 11:15:20.838500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2003147 ]
00:07:25.314 EAL: No free 2048 kB hugepages reported on node 1
00:07:25.314 [2024-07-26 11:15:20.948163] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2003024 has claimed it.
00:07:25.314 [2024-07-26 11:15:20.948224] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:07:26.247 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2003147) - No such process
00:07:26.247 ERROR: process (pid: 2003147) is no longer running
00:07:26.247 11:15:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:26.247 11:15:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1
00:07:26.247 11:15:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1
00:07:26.247 11:15:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:26.247 11:15:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:26.247 11:15:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:26.247 11:15:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2003024
00:07:26.247 11:15:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2003024
00:07:26.247 11:15:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:26.813 lslocks: write error
00:07:26.813 11:15:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2003024
00:07:26.813 11:15:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2003024 ']'
00:07:26.813 11:15:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2003024
00:07:26.813 11:15:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:07:26.813 11:15:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:26.813 11:15:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2003024
00:07:26.813 11:15:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:26.813 11:15:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:26.813 11:15:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2003024'
00:07:26.813 killing process with pid 2003024
00:07:26.813 11:15:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2003024
00:07:26.813 11:15:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2003024
00:07:27.380
00:07:27.380 real 0m2.578s
00:07:27.380 user 0m3.070s
00:07:27.380 sys 0m0.764s
00:07:27.380 11:15:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:27.380 11:15:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:27.380 ************************************
00:07:27.380 END TEST locking_app_on_locked_coremask
00:07:27.380 ************************************
00:07:27.380 11:15:22 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:07:27.380 11:15:22 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:27.380 11:15:22 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:27.380 11:15:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:27.380 ************************************
00:07:27.380 START TEST locking_overlapped_coremask
00:07:27.380 ************************************
00:07:27.380 11:15:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask
00:07:27.380 11:15:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2003437
00:07:27.380 11:15:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:07:27.380 11:15:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2003437 /var/tmp/spdk.sock
00:07:27.380 11:15:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2003437 ']'
00:07:27.380 11:15:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:27.380 11:15:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:27.380 11:15:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:27.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:27.380 11:15:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:27.380 11:15:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:27.380 [2024-07-26 11:15:22.950863] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization...
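(The NOT wrapper traced at cpu_locks.sh@120 above turns the second target's expected startup failure into a test pass. Judging by the es bookkeeping in the trace, its shape is roughly the sketch below; this is inferred from the log, not copied from autotest_common.sh:)

  # succeed only when the wrapped command fails outright (not via signal)
  NOT() {
      local es=0
      "$@" || es=$?
      (( es > 128 )) && return "$es"   # killed/crashed: propagate the status
      (( es != 0 ))                    # plain failure is the expected outcome
  }
  NOT waitforlisten 2003147 /var/tmp/spdk2.sock && echo "failed as expected"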
00:07:27.380 [2024-07-26 11:15:22.950983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2003437 ]
00:07:27.380 EAL: No free 2048 kB hugepages reported on node 1
00:07:27.380 [2024-07-26 11:15:23.027223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:27.637 [2024-07-26 11:15:23.155255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:27.637 [2024-07-26 11:15:23.155346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:07:27.637 [2024-07-26 11:15:23.155350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:27.894 11:15:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:27.894 11:15:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0
00:07:27.894 11:15:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2003467
00:07:27.894 11:15:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2003467 /var/tmp/spdk2.sock
00:07:27.894 11:15:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0
00:07:27.894 11:15:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2003467 /var/tmp/spdk2.sock
00:07:27.894 11:15:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:07:27.894 11:15:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:07:27.894 11:15:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:27.894 11:15:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:07:27.894 11:15:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:27.894 11:15:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2003467 /var/tmp/spdk2.sock
00:07:27.894 11:15:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2003467 ']'
00:07:27.894 11:15:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:27.894 11:15:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:27.894 11:15:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:27.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:27.894 11:15:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:27.894 11:15:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:27.894 [2024-07-26 11:15:23.491370] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization...
00:07:27.894 [2024-07-26 11:15:23.491505] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2003467 ]
00:07:28.151 EAL: No free 2048 kB hugepages reported on node 1
00:07:28.151 [2024-07-26 11:15:23.591601] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2003437 has claimed it.
00:07:28.151 [2024-07-26 11:15:23.591658] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:07:28.715 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2003467) - No such process
00:07:28.715 ERROR: process (pid: 2003467) is no longer running
00:07:28.715 11:15:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:28.715 11:15:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1
00:07:28.715 11:15:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1
00:07:28.715 11:15:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:28.715 11:15:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:28.715 11:15:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:28.715 11:15:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:07:28.715 11:15:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:07:28.715 11:15:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:07:28.715 11:15:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:07:28.715 11:15:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2003437
00:07:28.715 11:15:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 2003437 ']'
00:07:28.715 11:15:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 2003437
00:07:28.715 11:15:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname
00:07:28.715 11:15:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:28.715 11:15:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2003437
00:07:28.715 11:15:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:28.715 11:15:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:28.715 11:15:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2003437'
00:07:28.715 killing process with pid 2003437
00:07:28.715 11:15:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 2003437
00:07:28.715 11:15:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 2003437
00:07:29.312
00:07:29.312 real 0m1.870s
00:07:29.312 user 0m4.947s
00:07:29.312 sys 0m0.525s
00:07:29.312 11:15:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:29.312 11:15:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:29.312 ************************************
00:07:29.312 END TEST locking_overlapped_coremask
00:07:29.312 ************************************
00:07:29.312 11:15:24 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:07:29.312 11:15:24 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:29.312 11:15:24 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:29.312 11:15:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:29.312 ************************************
00:07:29.312 START TEST locking_overlapped_coremask_via_rpc
00:07:29.312 ************************************
00:07:29.312 11:15:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc
00:07:29.312 11:15:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2003635
00:07:29.312 11:15:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:07:29.312 11:15:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2003635 /var/tmp/spdk.sock
00:07:29.312 11:15:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2003635 ']'
00:07:29.312 11:15:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:29.312 11:15:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:29.312 11:15:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:29.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:29.312 11:15:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:29.312 11:15:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:29.312 [2024-07-26 11:15:24.874444] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization...
00:07:29.312 [2024-07-26 11:15:24.874556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2003635 ]
00:07:29.312 EAL: No free 2048 kB hugepages reported on node 1
00:07:29.312 [2024-07-26 11:15:24.953126] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
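(check_remaining_locks, traced at cpu_locks.sh@36-38 above, asserts that the lock files on disk correspond exactly to the claimed cores. Its body, runnable standalone for a three-core mask such as -m 0x7:)

  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})    # cores 0-2
  [[ ${locks[*]} == "${locks_expected[*]}" ]] && echo "lock files match claimed cores"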
00:07:29.312 [2024-07-26 11:15:24.953168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:29.570 [2024-07-26 11:15:25.080954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:29.570 [2024-07-26 11:15:25.081007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:07:29.570 [2024-07-26 11:15:25.081011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:29.828 11:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:29.828 11:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:07:29.828 11:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2003766
00:07:29.828 11:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:07:29.828 11:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2003766 /var/tmp/spdk2.sock
00:07:29.828 11:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2003766 ']'
00:07:29.828 11:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:29.828 11:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:29.828 11:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:29.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:29.828 11:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:29.828 11:15:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:30.085 [2024-07-26 11:15:25.422626] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization...
00:07:30.085 [2024-07-26 11:15:25.422744] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2003766 ]
00:07:30.085 EAL: No free 2048 kB hugepages reported on node 1
00:07:30.085 [2024-07-26 11:15:25.529543] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:30.085 [2024-07-26 11:15:25.529583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:30.343 [2024-07-26 11:15:25.784788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:07:30.343 [2024-07-26 11:15:25.788480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:07:30.343 [2024-07-26 11:15:25.788484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:07:30.909 11:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:30.909 11:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:07:30.909 11:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:07:30.909 11:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:30.909 11:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:30.909 11:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:30.909 11:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:07:30.909 11:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0
00:07:30.909 11:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:07:30.909 11:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:07:30.909 11:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:30.909 11:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:07:30.909 11:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:30.909 11:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:07:30.909 11:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:30.909 11:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:30.909 [2024-07-26 11:15:26.346549] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2003635 has claimed it.
00:07:30.909 request: 00:07:30.909 { 00:07:30.909 "method": "framework_enable_cpumask_locks", 00:07:30.909 "req_id": 1 00:07:30.909 } 00:07:30.909 Got JSON-RPC error response 00:07:30.909 response: 00:07:30.909 { 00:07:30.909 "code": -32603, 00:07:30.909 "message": "Failed to claim CPU core: 2" 00:07:30.909 } 00:07:30.909 11:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:30.909 11:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:30.909 11:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:30.909 11:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:30.909 11:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:30.909 11:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2003635 /var/tmp/spdk.sock 00:07:30.909 11:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2003635 ']' 00:07:30.909 11:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.909 11:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:30.909 11:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.909 11:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:30.909 11:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.167 11:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:31.167 11:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:31.167 11:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2003766 /var/tmp/spdk2.sock 00:07:31.167 11:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2003766 ']' 00:07:31.167 11:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:31.167 11:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:31.167 11:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:31.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
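(Editor's note on the failure above: the test runs two spdk_tgt instances with overlapping core masks. The first instance owns cores 0-2, judging by its reactor placement, while the second was launched with -m 0x1c, i.e. cores 2-4, and both start with --disable-cpumask-locks. Enabling the locks via RPC on the first instance succeeds and, as the check_remaining_locks step a few lines down confirms, creates per-core lock files /var/tmp/spdk_cpu_lock_000 through _002. The same RPC against the second instance then fails with -32603 because core 2's lock is already held. A minimal sketch of the scenario, with the first instance's 0x7 mask inferred from the reactor notices above:

    # first target on the default socket; cores 0-2 inferred from the log
    ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    # second target shares core 2 and listens on its own RPC socket
    ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    # claim the per-core lock files from the first instance: succeeds
    ./scripts/rpc.py framework_enable_cpumask_locks
    # same request against the second instance: -32603, core 2 already claimed
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
)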
00:07:31.167 11:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:31.167 11:15:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.732 11:15:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:31.732 11:15:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:31.732 11:15:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:31.732 11:15:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:31.732 11:15:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:31.732 11:15:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:31.732 00:07:31.732 real 0m2.511s 00:07:31.732 user 0m1.510s 00:07:31.732 sys 0m0.272s 00:07:31.732 11:15:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:31.732 11:15:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.732 ************************************ 00:07:31.732 END TEST locking_overlapped_coremask_via_rpc 00:07:31.732 ************************************ 00:07:31.732 11:15:27 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:31.732 11:15:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2003635 ]] 00:07:31.732 11:15:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2003635 00:07:31.732 11:15:27 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2003635 ']' 00:07:31.732 11:15:27 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2003635 00:07:31.732 11:15:27 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:31.732 11:15:27 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:31.732 11:15:27 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2003635 00:07:31.732 11:15:27 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:31.732 11:15:27 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:31.732 11:15:27 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2003635' 00:07:31.732 killing process with pid 2003635 00:07:31.732 11:15:27 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2003635 00:07:31.732 11:15:27 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2003635 00:07:32.297 11:15:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2003766 ]] 00:07:32.297 11:15:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2003766 00:07:32.297 11:15:27 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2003766 ']' 00:07:32.297 11:15:27 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2003766 00:07:32.297 11:15:27 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:32.297 11:15:27 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:07:32.297 11:15:27 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2003766 00:07:32.297 11:15:27 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:32.297 11:15:27 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:32.297 11:15:27 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2003766' 00:07:32.297 killing process with pid 2003766 00:07:32.298 11:15:27 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2003766 00:07:32.298 11:15:27 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2003766 00:07:33.234 11:15:28 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:33.234 11:15:28 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:33.234 11:15:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2003635 ]] 00:07:33.234 11:15:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2003635 00:07:33.234 11:15:28 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2003635 ']' 00:07:33.234 11:15:28 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2003635 00:07:33.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2003635) - No such process 00:07:33.234 11:15:28 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2003635 is not found' 00:07:33.234 Process with pid 2003635 is not found 00:07:33.234 11:15:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2003766 ]] 00:07:33.234 11:15:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2003766 00:07:33.234 11:15:28 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2003766 ']' 00:07:33.234 11:15:28 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2003766 00:07:33.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2003766) - No such process 00:07:33.234 11:15:28 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2003766 is not found' 00:07:33.234 Process with pid 2003766 is not found 00:07:33.234 11:15:28 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:33.234 00:07:33.234 real 0m19.412s 00:07:33.234 user 0m34.608s 00:07:33.234 sys 0m6.531s 00:07:33.234 11:15:28 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:33.234 11:15:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:33.234 ************************************ 00:07:33.234 END TEST cpu_locks 00:07:33.234 ************************************ 00:07:33.234 00:07:33.234 real 0m51.593s 00:07:33.234 user 1m41.796s 00:07:33.234 sys 0m12.212s 00:07:33.234 11:15:28 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:33.234 11:15:28 event -- common/autotest_common.sh@10 -- # set +x 00:07:33.234 ************************************ 00:07:33.234 END TEST event 00:07:33.234 ************************************ 00:07:33.234 11:15:28 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:33.234 11:15:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:33.234 11:15:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:33.234 11:15:28 -- common/autotest_common.sh@10 -- # set +x 00:07:33.234 ************************************ 00:07:33.234 START TEST thread 00:07:33.234 ************************************ 00:07:33.235 11:15:28 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:33.235 * Looking for test storage... 00:07:33.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:33.235 11:15:28 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:33.235 11:15:28 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:33.235 11:15:28 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:33.235 11:15:28 thread -- common/autotest_common.sh@10 -- # set +x 00:07:33.235 ************************************ 00:07:33.235 START TEST thread_poller_perf 00:07:33.235 ************************************ 00:07:33.235 11:15:28 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:33.235 [2024-07-26 11:15:28.749227] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:07:33.235 [2024-07-26 11:15:28.749293] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2004257 ] 00:07:33.235 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.235 [2024-07-26 11:15:28.817342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.493 [2024-07-26 11:15:28.943231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.493 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:34.427 ====================================== 00:07:34.427 busy:2712371644 (cyc) 00:07:34.427 total_run_count: 292000 00:07:34.427 tsc_hz: 2700000000 (cyc) 00:07:34.427 ====================================== 00:07:34.427 poller_cost: 9288 (cyc), 3440 (nsec) 00:07:34.427 00:07:34.427 real 0m1.346s 00:07:34.427 user 0m1.250s 00:07:34.427 sys 0m0.089s 00:07:34.427 11:15:30 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.427 11:15:30 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:34.427 ************************************ 00:07:34.427 END TEST thread_poller_perf 00:07:34.427 ************************************ 00:07:34.686 11:15:30 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:34.686 11:15:30 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:34.686 11:15:30 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.686 11:15:30 thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.686 ************************************ 00:07:34.686 START TEST thread_poller_perf 00:07:34.686 ************************************ 00:07:34.686 11:15:30 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:34.686 [2024-07-26 11:15:30.157902] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
00:07:34.686 [2024-07-26 11:15:30.158047] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2004413 ] 00:07:34.686 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.686 [2024-07-26 11:15:30.247423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.944 [2024-07-26 11:15:30.371533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.944 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:35.879 ====================================== 00:07:35.879 busy:2702685868 (cyc) 00:07:35.879 total_run_count: 3854000 00:07:35.879 tsc_hz: 2700000000 (cyc) 00:07:35.879 ====================================== 00:07:35.879 poller_cost: 701 (cyc), 259 (nsec) 00:07:35.879 00:07:35.879 real 0m1.366s 00:07:35.879 user 0m1.248s 00:07:35.879 sys 0m0.111s 00:07:35.879 11:15:31 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.879 11:15:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:35.879 ************************************ 00:07:35.879 END TEST thread_poller_perf 00:07:35.879 ************************************ 00:07:35.879 11:15:31 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:35.879 00:07:35.879 real 0m2.896s 00:07:35.879 user 0m2.582s 00:07:35.879 sys 0m0.314s 00:07:35.879 11:15:31 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.879 11:15:31 thread -- common/autotest_common.sh@10 -- # set +x 00:07:35.879 ************************************ 00:07:35.879 END TEST thread 00:07:35.879 ************************************ 00:07:36.137 11:15:31 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:07:36.137 11:15:31 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:36.137 11:15:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:36.137 11:15:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:36.137 11:15:31 -- common/autotest_common.sh@10 -- # set +x 00:07:36.137 ************************************ 00:07:36.137 START TEST app_cmdline 00:07:36.137 ************************************ 00:07:36.137 11:15:31 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:36.137 * Looking for test storage... 00:07:36.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:36.137 11:15:31 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:36.137 11:15:31 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2004611 00:07:36.137 11:15:31 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:36.137 11:15:31 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2004611 00:07:36.137 11:15:31 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 2004611 ']' 00:07:36.137 11:15:31 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.137 11:15:31 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:36.137 11:15:31 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
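(Editor's note on reading the two poller_perf runs above: the banner maps the flags directly, -b 1000 registers 1000 pollers, -t 1 runs for one second, and -l gives the poller period in microseconds, so the -l 0 run dispatches the pollers back-to-back. The poller_cost line is consistent with plain division of the busy cycle counter by total_run_count, converted to nanoseconds at the reported 2.7 GHz tsc_hz. Both runs check out with integer arithmetic:

    # first run (-l 1, timed pollers):
    echo $(( 2712371644 / 292000 ))   # 9288 cycles per poller invocation
    echo $(( 9288 * 1000 / 2700 ))    # 3440 nsec at tsc_hz=2700000000
    # second run (-l 0, untimed pollers):
    echo $(( 2702685868 / 3854000 ))  # 701 cycles
    echo $(( 701 * 1000 / 2700 ))     # 259 nsec

The gap between 9288 and 701 cycles is likely the extra bookkeeping for period-based pollers versus pollers run on every reactor iteration.)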
00:07:36.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.137 11:15:31 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:36.137 11:15:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:36.137 [2024-07-26 11:15:31.721865] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:07:36.137 [2024-07-26 11:15:31.721977] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2004611 ] 00:07:36.137 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.137 [2024-07-26 11:15:31.794064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.395 [2024-07-26 11:15:31.916183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.653 11:15:32 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:36.653 11:15:32 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:36.653 11:15:32 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:37.218 { 00:07:37.218 "version": "SPDK v24.09-pre git sha1 064b11df7", 00:07:37.218 "fields": { 00:07:37.218 "major": 24, 00:07:37.218 "minor": 9, 00:07:37.218 "patch": 0, 00:07:37.218 "suffix": "-pre", 00:07:37.218 "commit": "064b11df7" 00:07:37.218 } 00:07:37.218 } 00:07:37.219 11:15:32 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:37.219 11:15:32 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:37.219 11:15:32 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:37.219 11:15:32 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:37.219 11:15:32 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:37.219 11:15:32 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.219 11:15:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:37.219 11:15:32 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:37.219 11:15:32 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:37.219 11:15:32 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.219 11:15:32 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:37.219 11:15:32 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:37.219 11:15:32 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:37.219 11:15:32 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:37.219 11:15:32 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:37.219 11:15:32 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:37.219 11:15:32 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.219 11:15:32 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:37.219 11:15:32 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:07:37.219 11:15:32 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:37.219 11:15:32 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.219 11:15:32 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:37.219 11:15:32 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:37.219 11:15:32 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:37.477 request: 00:07:37.477 { 00:07:37.477 "method": "env_dpdk_get_mem_stats", 00:07:37.477 "req_id": 1 00:07:37.477 } 00:07:37.477 Got JSON-RPC error response 00:07:37.477 response: 00:07:37.477 { 00:07:37.477 "code": -32601, 00:07:37.477 "message": "Method not found" 00:07:37.477 } 00:07:37.477 11:15:33 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:37.477 11:15:33 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:37.477 11:15:33 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:37.477 11:15:33 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:37.477 11:15:33 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2004611 00:07:37.477 11:15:33 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 2004611 ']' 00:07:37.477 11:15:33 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 2004611 00:07:37.477 11:15:33 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:37.477 11:15:33 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:37.477 11:15:33 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2004611 00:07:37.477 11:15:33 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:37.477 11:15:33 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:37.477 11:15:33 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2004611' 00:07:37.477 killing process with pid 2004611 00:07:37.477 11:15:33 app_cmdline -- common/autotest_common.sh@969 -- # kill 2004611 00:07:37.477 11:15:33 app_cmdline -- common/autotest_common.sh@974 -- # wait 2004611 00:07:38.045 00:07:38.045 real 0m1.985s 00:07:38.045 user 0m2.647s 00:07:38.045 sys 0m0.527s 00:07:38.045 11:15:33 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.045 11:15:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:38.045 ************************************ 00:07:38.045 END TEST app_cmdline 00:07:38.045 ************************************ 00:07:38.045 11:15:33 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:38.045 11:15:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:38.045 11:15:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.045 11:15:33 -- common/autotest_common.sh@10 -- # set +x 00:07:38.045 ************************************ 00:07:38.045 START TEST version 00:07:38.045 ************************************ 00:07:38.045 11:15:33 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:38.303 * Looking for test storage... 
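(Editor's note on the app_cmdline run above: the target was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so any method outside that allowlist is rejected before dispatch. That is why env_dpdk_get_mem_stats comes back with the JSON-RPC standard -32601 "Method not found", unlike the -32603 internal error in the cpu_locks test earlier, where the method actually ran and failed. The happy and unhappy paths, distilled from the commands traced above:

    # target restricted to exactly two methods
    ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    # allowed: lists exactly rpc_get_methods and spdk_get_version
    ./scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort
    # not on the allowlist: fails with code -32601, "Method not found"
    ./scripts/rpc.py env_dpdk_get_mem_stats
)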
00:07:38.303 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:38.303 11:15:33 version -- app/version.sh@17 -- # get_header_version major 00:07:38.303 11:15:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:38.303 11:15:33 version -- app/version.sh@14 -- # cut -f2 00:07:38.303 11:15:33 version -- app/version.sh@14 -- # tr -d '"' 00:07:38.303 11:15:33 version -- app/version.sh@17 -- # major=24 00:07:38.303 11:15:33 version -- app/version.sh@18 -- # get_header_version minor 00:07:38.303 11:15:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:38.303 11:15:33 version -- app/version.sh@14 -- # cut -f2 00:07:38.303 11:15:33 version -- app/version.sh@14 -- # tr -d '"' 00:07:38.303 11:15:33 version -- app/version.sh@18 -- # minor=9 00:07:38.303 11:15:33 version -- app/version.sh@19 -- # get_header_version patch 00:07:38.303 11:15:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:38.303 11:15:33 version -- app/version.sh@14 -- # cut -f2 00:07:38.303 11:15:33 version -- app/version.sh@14 -- # tr -d '"' 00:07:38.303 11:15:33 version -- app/version.sh@19 -- # patch=0 00:07:38.303 11:15:33 version -- app/version.sh@20 -- # get_header_version suffix 00:07:38.303 11:15:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:38.303 11:15:33 version -- app/version.sh@14 -- # cut -f2 00:07:38.303 11:15:33 version -- app/version.sh@14 -- # tr -d '"' 00:07:38.303 11:15:33 version -- app/version.sh@20 -- # suffix=-pre 00:07:38.303 11:15:33 version -- app/version.sh@22 -- # version=24.9 00:07:38.303 11:15:33 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:38.303 11:15:33 version -- app/version.sh@28 -- # version=24.9rc0 00:07:38.303 11:15:33 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:38.303 11:15:33 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:38.303 11:15:33 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:38.303 11:15:33 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:38.303 00:07:38.303 real 0m0.160s 00:07:38.303 user 0m0.092s 00:07:38.303 sys 0m0.095s 00:07:38.303 11:15:33 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.303 11:15:33 version -- common/autotest_common.sh@10 -- # set +x 00:07:38.303 ************************************ 00:07:38.303 END TEST version 00:07:38.303 ************************************ 00:07:38.303 11:15:33 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:07:38.303 11:15:33 -- spdk/autotest.sh@202 -- # uname -s 00:07:38.303 11:15:33 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:07:38.303 11:15:33 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:07:38.303 11:15:33 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:07:38.303 11:15:33 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 
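(Editor's note on the version test above: get_header_version derives each component straight from the C header with one grep/cut/tr pipeline, then assembles "24.9rc0" and compares it to the Python package's view of itself. The pipelines, exactly as traced, with paths relative to the SPDK checkout:

    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'   # 24
    grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'   # 9
    grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'   # 0
    grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'  # -pre
    # patch 0 is dropped and the -pre suffix is rendered as rc0, giving 24.9rc0,
    # which must match:
    python3 -c 'import spdk; print(spdk.__version__)'
)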
00:07:38.303 11:15:33 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:07:38.303 11:15:33 -- spdk/autotest.sh@264 -- # timing_exit lib 00:07:38.303 11:15:33 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:38.303 11:15:33 -- common/autotest_common.sh@10 -- # set +x 00:07:38.303 11:15:33 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:07:38.303 11:15:33 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:07:38.303 11:15:33 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:07:38.303 11:15:33 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:07:38.303 11:15:33 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:07:38.303 11:15:33 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:07:38.303 11:15:33 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:38.303 11:15:33 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:38.303 11:15:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.303 11:15:33 -- common/autotest_common.sh@10 -- # set +x 00:07:38.303 ************************************ 00:07:38.303 START TEST nvmf_tcp 00:07:38.303 ************************************ 00:07:38.303 11:15:33 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:38.303 * Looking for test storage... 00:07:38.563 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:38.563 11:15:33 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:38.563 11:15:33 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:38.563 11:15:33 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:38.563 11:15:33 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:38.563 11:15:33 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.563 11:15:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:38.563 ************************************ 00:07:38.563 START TEST nvmf_target_core 00:07:38.563 ************************************ 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:38.563 * Looking for test storage... 00:07:38.563 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:38.563 ************************************ 00:07:38.563 START TEST nvmf_abort 00:07:38.563 ************************************ 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:38.563 * Looking for test storage... 
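(Editor's note: every suite in this log is driven by the same run_test helper from autotest_common.sh, which is what produces the repeating starred START TEST / END TEST banners with a real/user/sys time summary in between. A sketch of the shape only, not the actual implementation:

    run_test() {                                      # sketch, not the real helper
      local name=$1; shift
      echo "************ START TEST $name ************"
      time "$@"                                       # source of the real/user/sys lines
      echo "************ END TEST $name ************"
    }
)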
00:07:38.563 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:38.563 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
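(Editor's note on the common.sh setup above: the initiator identity is generated once per run. nvme gen-hostnqn emits a uuid-based NQN, the uuid tail doubles as the host ID, and both ride along on every later connect through the NVME_HOST array. A sketch; the extraction step is hypothetical, since the log only records the resulting values:

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:<host uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # hypothetical: peel the uuid off the NQN
    # later used as: nvme connect "--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID" ...
)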
00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:07:38.823 11:15:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:41.356 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:41.356 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:41.356 11:15:36 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:41.356 Found net devices under 0000:84:00.0: cvl_0_0 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:41.356 Found net devices under 0000:84:00.1: cvl_0_1 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:41.356 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:41.357 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:41.357 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:41.357 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:41.357 
11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:41.357 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:41.357 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:41.357 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:41.357 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:41.357 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:41.357 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:41.357 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:41.357 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:41.357 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:41.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:41.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:07:41.357 00:07:41.357 --- 10.0.0.2 ping statistics --- 00:07:41.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.357 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:07:41.357 11:15:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:41.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:41.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:07:41.357 00:07:41.357 --- 10.0.0.1 ping statistics --- 00:07:41.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.357 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:07:41.357 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:41.357 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:07:41.357 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:41.357 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:41.357 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:41.357 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:41.357 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:41.357 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:41.357 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:41.615 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:41.615 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:41.615 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:41.615 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:41.615 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # 
nvmfpid=2006802 00:07:41.615 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:41.615 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2006802 00:07:41.615 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 2006802 ']' 00:07:41.615 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.615 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:41.615 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.615 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:41.615 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:41.615 [2024-07-26 11:15:37.093934] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:07:41.615 [2024-07-26 11:15:37.094031] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:41.615 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.615 [2024-07-26 11:15:37.176357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:41.874 [2024-07-26 11:15:37.305537] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:41.874 [2024-07-26 11:15:37.305598] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:41.874 [2024-07-26 11:15:37.305614] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:41.874 [2024-07-26 11:15:37.305628] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:41.874 [2024-07-26 11:15:37.305639] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
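nvmf_tcp_init, traced above, gives the two ports distinct roles: the target port is moved into a private network namespace so the SPDK target and the kernel initiator can exchange real wire traffic on a single host. Reconstructed from the traced commands (the interface names and the 10.0.0.0/24 addresses are the ones this run chose):

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2                                   # root ns -> target, verified above
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator

Every target-side command is then prefixed with NVMF_TARGET_NS_CMD, which is why nvmf_tgt above is launched as `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xE`.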
00:07:41.874 [2024-07-26 11:15:37.305710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.874 [2024-07-26 11:15:37.305764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:41.874 [2024-07-26 11:15:37.305767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.874 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:41.874 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:07:41.874 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:41.874 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:41.874 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:41.874 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:41.874 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:41.874 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.874 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:41.874 [2024-07-26 11:15:37.476265] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:41.874 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.874 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:41.874 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.874 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:41.874 Malloc0 00:07:41.874 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.874 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:41.875 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.875 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:41.875 Delay0 00:07:41.875 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.875 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:41.875 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.875 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:42.133 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.133 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:42.133 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.133 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:42.133 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:07:42.133 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:42.133 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.133 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:42.133 [2024-07-26 11:15:37.552647] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:42.133 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.133 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:42.133 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.133 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:42.133 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.133 11:15:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:42.133 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.133 [2024-07-26 11:15:37.659650] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:44.696 Initializing NVMe Controllers 00:07:44.696 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:44.696 controller IO queue size 128 less than required 00:07:44.696 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:44.696 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:44.696 Initialization complete. Launching workers. 
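The workload above is SPDK's abort example, pointed at the subsystem through the initiator-side address. The namespace is a delay bdev stacked on a malloc bdev, so reads stay in flight long enough to be abortable, and -q 128 deliberately exceeds what the controller grants (hence the "queue size 128 less than required" warning). As traced, with the workspace path shortened:

  # Flags as traced: -c 0x1 core mask, -t 1 one-second run, -q 128 queue depth.
  build/examples/abort -c 0x1 -t 1 -l warning -q 128 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

The counters printed next are the pass criterion: nearly every submitted abort should come back "success", with a small "unsuccessful" remainder expected for I/Os that completed before their abort arrived.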
00:07:44.696 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29505 00:07:44.696 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29566, failed to submit 62 00:07:44.696 success 29509, unsuccessful 57, failed 0 00:07:44.696 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:44.696 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.696 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:44.696 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.696 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:44.696 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:44.696 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:44.696 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:07:44.696 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:44.696 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:07:44.696 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:44.696 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:44.696 rmmod nvme_tcp 00:07:44.696 rmmod nvme_fabrics 00:07:44.696 rmmod nvme_keyring 00:07:44.696 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:44.696 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:07:44.696 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:07:44.696 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2006802 ']' 00:07:44.696 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2006802 00:07:44.696 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 2006802 ']' 00:07:44.696 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 2006802 00:07:44.696 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:07:44.696 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:44.696 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2006802 00:07:44.696 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:44.696 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:44.696 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2006802' 00:07:44.696 killing process with pid 2006802 00:07:44.696 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 2006802 00:07:44.696 11:15:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 2006802 00:07:44.696 11:15:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:44.696 11:15:40 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:44.696 11:15:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:44.696 11:15:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:44.696 11:15:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:44.697 11:15:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.697 11:15:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:44.697 11:15:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.234 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:47.234 00:07:47.234 real 0m8.205s 00:07:47.234 user 0m11.259s 00:07:47.234 sys 0m3.214s 00:07:47.234 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:47.234 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:47.234 ************************************ 00:07:47.234 END TEST nvmf_abort 00:07:47.234 ************************************ 00:07:47.234 11:15:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:47.234 11:15:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:47.234 11:15:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:47.234 11:15:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:47.234 ************************************ 00:07:47.234 START TEST nvmf_ns_hotplug_stress 00:07:47.234 ************************************ 00:07:47.234 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:47.234 * Looking for test storage... 
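nvmftestfini, traced at the end of the abort test, unwinds everything nvmftestinit built before the next test re-creates it. Roughly (the module unloads and the address flush are as traced; the netns removal inside _remove_spdk_ns is an inference, since the helper's trace is redirected away here):

  modprobe -v -r nvme-tcp          # rmmod output above shows this also drops
  modprobe -v -r nvme-fabrics      # nvme_fabrics and nvme_keyring
  kill "$nvmfpid" && wait "$nvmfpid"    # killprocess: stop the target, reap it
  ip netns delete cvl_0_0_ns_spdk       # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1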
00:07:47.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:47.234 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:47.234 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:47.234 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:47.234 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:47.234 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:47.234 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:47.234 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:47.234 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:47.234 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:47.234 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:47.234 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:47.234 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:47.234 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:47.234 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:47.234 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:47.234 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:47.234 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:47.234 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:47.234 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:47.234 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:47.234 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:47.234 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:47.234 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.235 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.235 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.235 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:47.235 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.235 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:07:47.235 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:47.235 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:47.235 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:47.235 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:47.235 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:47.235 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:47.235 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:47.235 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:47.235 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:47.235 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:47.235 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:47.235 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:47.235 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:47.235 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:47.235 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:47.235 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.235 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:47.235 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.235 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:47.235 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:47.235 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:07:47.235 11:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:49.770 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:49.770 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:07:49.770 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:49.770 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:49.770 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:49.770 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:49.770 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:49.770 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:07:49.770 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:49.770 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:07:49.770 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:07:49.770 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:07:49.770 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 
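gather_supported_nvmf_pci_devs, starting here, classifies NICs purely by PCI vendor:device ID before deciding which ports the test may claim. The buckets filled in the next few lines, as a sketch (IDs copied from the traces; 0x8086 is Intel, 0x15b3 is Mellanox, per the script's own variables):

  declare -A nic_family=(
    [0x8086:0x1592]=e810  [0x8086:0x159b]=e810
    [0x8086:0x37d2]=x722
    [0x15b3:0x1017]=mlx   [0x15b3:0x1019]=mlx   # a subset of the mlx IDs traced below
  )
  # This run matches 0x8086:0x159b twice, i.e. the two E810 ports at 0000:84:00.{0,1}.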
00:07:49.770 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:07:49.770 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:07:49.770 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:49.770 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:49.770 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:49.770 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:49.770 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:49.770 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:49.770 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:49.770 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:49.770 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:49.770 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:49.770 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:49.770 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:49.770 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:49.770 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:49.771 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:49.771 11:15:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:49.771 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:49.771 Found net devices under 0000:84:00.0: cvl_0_0 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:49.771 Found net devices under 0000:84:00.1: cvl_0_1 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:49.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:49.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:07:49.771 00:07:49.771 --- 10.0.0.2 ping statistics --- 00:07:49.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.771 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:49.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:49.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:07:49.771 00:07:49.771 --- 10.0.0.1 ping statistics --- 00:07:49.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.771 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2009174 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2009174 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 2009174 ']' 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:49.771 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
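waitforlisten blocks until the freshly launched nvmf_tgt answers on its RPC socket, and bails out early if the process dies first. A minimal sketch of that pattern (the real helper lives in autotest_common.sh; the poll interval here is illustrative):

  pid=$nvmfpid
  rpc_addr=/var/tmp/spdk.sock
  for ((i = 0; i < 100; i++)); do          # max_retries=100, as traced
    kill -0 "$pid" || { echo 'app died before listening'; exit 1; }
    # Any successful RPC proves the socket is up; rpc_get_methods is the cheapest.
    scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null && break
    sleep 0.5                              # illustrative pacing
  done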
00:07:49.772 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:49.772 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:49.772 [2024-07-26 11:15:45.313728] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:07:49.772 [2024-07-26 11:15:45.313824] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:49.772 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.772 [2024-07-26 11:15:45.390697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:50.030 [2024-07-26 11:15:45.513620] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:50.030 [2024-07-26 11:15:45.513681] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:50.031 [2024-07-26 11:15:45.513698] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:50.031 [2024-07-26 11:15:45.513713] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:50.031 [2024-07-26 11:15:45.513725] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:50.031 [2024-07-26 11:15:45.513789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.031 [2024-07-26 11:15:45.513845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:50.031 [2024-07-26 11:15:45.513849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.031 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:50.031 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:07:50.031 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:50.031 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:50.031 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:50.031 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:50.031 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:50.031 11:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:50.597 [2024-07-26 11:15:46.053606] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:50.597 11:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:51.163 11:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:51.422 
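From here the hotplug target is assembled over raw rpc.py calls: a TCP transport, a subsystem capped at ten namespaces (-m 10), listeners on the namespaced address, and two stacked bdevs to hot-plug. The sequence traced here and just below, with paths shortened:

  rpc.py nvmf_create_transport -t tcp -o -u 8192       # TCP transport, flags as traced
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_malloc_create 32 512 -b Malloc0          # 32 MB backing store, 512 B blocks
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000      # artificial latency, values in us
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  rpc.py bdev_null_create NULL1 1000 512               # 1000 MB null bdev: resize fodder
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

spdk_nvme_perf is then started against 10.0.0.2:4420 for 30 seconds (-t 30, randread, queue depth 128) and its PID recorded, so the stress loop can assert the workload survives each hotplug.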
[2024-07-26 11:15:46.996566] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:51.422 11:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:51.989 11:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:52.555 Malloc0 00:07:52.555 11:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:52.813 Delay0 00:07:52.813 11:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.192 11:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:53.451 NULL1 00:07:53.451 11:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:53.709 11:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2009651 00:07:53.709 11:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:53.709 11:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2009651 00:07:53.709 11:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.709 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.081 Read completed with error (sct=0, sc=11) 00:07:55.081 11:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.081 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.081 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.081 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.081 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.081 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.339 11:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:55.339 11:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:55.596 true 00:07:55.596 11:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2009651 00:07:55.596 11:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.528 11:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.786 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.786 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.786 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.786 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.786 11:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:56.786 11:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:57.352 true 00:07:57.352 11:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2009651 00:07:57.352 11:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.915 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.915 11:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.915 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.915 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.915 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.915 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:58.173 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:58.173 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:58.173 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:58.173 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:58.173 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:58.173 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:07:58.431 11:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:58.431 11:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:58.689 true 00:07:58.689 11:15:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2009651 00:07:58.689 11:15:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.254 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:59.254 11:15:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.254 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:59.512 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:59.512 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:59.512 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:59.816 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:59.816 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:00.074 true 00:08:00.074 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2009651 00:08:00.074 11:15:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.448 11:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.706 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.706 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.706 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.706 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.706 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.706 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.964 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.964 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:01.964 11:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:01.964 11:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:02.222 true 00:08:02.222 11:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2009651 00:08:02.222 11:15:57 
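Each iteration of the stress loop, visible interleaved with perf's suppressed messages, hot-removes namespace 1, re-adds it, then grows NULL1 by one unit; the `kill -0 $PERF_PID` checks between steps assert that the initiator workload is still alive. The "Read completed with error (sct=0, sc=11)" floods are the expected fallout: status 0x0b is the NVMe generic "Invalid Namespace or Format" code returned while the namespace is detached. Reconstructed from the traces:

  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-remove ns 1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # hot-add it back
    null_size=$((null_size + 1))
    rpc.py bdev_null_resize NULL1 "$null_size"                      # grow the second ns
  done
  # Loop shape is a sketch; the traced null_size counter (1001, 1002, ...) shows one
  # resize per iteration until perf's 30-second run expires.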
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.156 11:15:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:03.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:03.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:03.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:03.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:03.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:03.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:03.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:03.414 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:03.414 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:03.414 11:15:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:03.414 11:15:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:03.672 true 00:08:03.672 11:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2009651 00:08:03.672 11:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.606 11:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.606 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.606 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.606 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.606 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.606 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.606 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.606 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.606 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:04.606 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:05.173 true 00:08:05.173 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2009651 00:08:05.173 11:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.765 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:08:05.765 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.765 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:05.765 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:05.765 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:06.021 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:06.021 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:06.021 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:06.021 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:06.021 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:06.021 11:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:06.585 true 00:08:06.585 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2009651 00:08:06.585 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.151 11:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:07.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:07.716 11:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:07.716 11:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:07.973 true 00:08:07.973 11:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2009651 00:08:07.974 11:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.349 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.349 11:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.349 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.349 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.349 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.607 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:09.607 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 
00:08:09.865 true
00:08:09.865 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2009651
00:08:09.865 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:10.430 11:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:10.688 11:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:08:10.688 11:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:08:10.946 true
00:08:10.946 11:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2009651
00:08:10.946 11:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:11.204 11:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:11.769 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:08:11.769 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:08:12.027 true
00:08:12.027 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2009651
00:08:12.027 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:12.592 11:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:12.850 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:08:12.850 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:08:13.108 true
00:08:13.108 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2009651
00:08:13.108 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:13.365 11:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:13.622 11:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:08:13.622 11:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:08:13.906 true
00:08:13.906 11:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2009651
00:08:13.906 11:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:14.178 11:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:14.742 11:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:08:14.743 11:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:08:15.000 true
00:08:15.000 11:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2009651
00:08:15.000 11:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:15.564 11:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:15.821 11:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:08:15.821 11:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:08:15.821 true
00:08:15.821 11:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2009651
00:08:15.821 11:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:16.385 11:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:16.642 11:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:08:16.642 11:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:08:17.207 true
00:08:17.207 11:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2009651
00:08:17.207 11:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
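Each rpc.py invocation in the trace is a thin JSON-RPC 2.0 client for the SPDK target, normally talking over a Unix domain socket. The bdev_null_resize records above correspond to a raw request of roughly the form below; the /var/tmp/spdk.sock path, the request id, and new_size being interpreted in MiB are assumptions for illustration, not read from this log.

    # Approximate raw JSON-RPC request behind "rpc.py bdev_null_resize NULL1 1017"
    # (assumes the default /var/tmp/spdk.sock socket and a netcat with -U support).
    printf '%s\n' '{"jsonrpc": "2.0", "id": 1, "method": "bdev_null_resize",
                    "params": {"name": "NULL1", "new_size": 1017}}' \
        | nc -U /var/tmp/spdk.sock

The bare "true" records that follow each resize call in the log appear to be rpc.py printing the method's JSON result.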
00:08:18.578 11:16:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:18.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:19.094 last message repeated 8 times (00:08:18.578 through 00:08:19.094)
00:08:19.094 11:16:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:08:19.094 11:16:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:08:19.352 true
00:08:19.352 11:16:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2009651
00:08:19.352 11:16:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:20.285 11:16:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:20.285 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:20.561 last message repeated 9 times (00:08:20.285 through 00:08:20.561)
00:08:20.561 [2024-07-26 11:16:15.990337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:08:20.564 last message repeated continuously ([2024-07-26 11:16:15.990337] through [2024-07-26 11:16:16.013817])
00:08:20.564 11:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:08:20.564 [2024-07-26 11:16:16.013885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:08:20.564 last message repeated 2 times (through [2024-07-26 11:16:16.014011])
00:08:20.564 11:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:08:20.564 [2024-07-26 11:16:16.014075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:08:20.566 last message repeated continuously ([2024-07-26 11:16:16.014075] through [2024-07-26 11:16:16.021082])
size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.021307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.021374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.021451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.021518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.021579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.021637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.021698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.021768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.021829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.021894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.021961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.022025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.022088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.022155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.022219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.022282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.022346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.022412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.022486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.022547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.022609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.022671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.022732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.022800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.022866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.022939] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.023005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.023071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.023143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.023220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.023293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.023360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.023422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.023493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.023555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.023620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.023685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.023750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.023812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.023878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.023944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.024008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.024079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.024145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.024207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.024267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.024332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.024397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.024471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.024539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.024605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 
[2024-07-26 11:16:16.024671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.024736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.024802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.024882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.024952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.025019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.025085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.025150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.025216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.025285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.025354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.025424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.026353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.026423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.026500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.026569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.026633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.026690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.566 [2024-07-26 11:16:16.026753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.026817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.026881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.026948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.027018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.027081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.027146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.027210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.027271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.027337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.027401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.027473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.027542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.027606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.027666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.027732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.027796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.027858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.027925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.027990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.028056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.028122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.028195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.028259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.028333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.028401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.028479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.028546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.028612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.028677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.028741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.028811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.028881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.028945] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.029010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.029077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.029145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.029214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.029282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.029347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.029413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.029488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.029556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.029627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.029693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.029760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.029822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.029881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.029945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.030006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.030068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.030132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.030195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.030260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.030323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.030393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.030465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.030532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.030758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 
[2024-07-26 11:16:16.030828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.030892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.030956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.031016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.031080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.031145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.031208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.031270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.031329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.031391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.031465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.031526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.031592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.031661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.031731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.031796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.032299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.032371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.032449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.032522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.032591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.032655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.032722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.032793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.032863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.032928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.032996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.567 [2024-07-26 11:16:16.033063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.033128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.033203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.033267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.033337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.033399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.033466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.033529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.033594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.033660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.033724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.033794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.033859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.033925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.033985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.034048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.034113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.034177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.034240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.034300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.034360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.034421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.034495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.034561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.034625] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.034687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.034749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.034812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.034873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.034938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.035001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.035067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.035144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.035216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.035280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.035346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.035411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.035496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.035581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.035654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.035722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.035786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.035854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.035921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.035986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.036055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.036125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.036196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.036260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.036325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 
[2024-07-26 11:16:16.036389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.036456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.036515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.036743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.036810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.036873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.036929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.036994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.037058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.037134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.037198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.037263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.037330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.037393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.037468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.037534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.037597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.037662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.037726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.037985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.038052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.038115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.038181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.038247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.038313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.038384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.038462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.038528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.038595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.038659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.038722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.038790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.038853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.038914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.038976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.039052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.039117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.039176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.039243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.039309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.039375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.039450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.039516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.039589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.039662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.039728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.039794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.568 [2024-07-26 11:16:16.039859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.039925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.039992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.040059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.040127] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.040198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.040265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.040331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.040399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.040474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.040542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.040611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.040676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.040748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.040821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.040888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.040956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.041021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.041088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.041961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.042034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.042100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.042164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.042226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.042290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.042346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.042414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.042486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.042550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.042613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 
[2024-07-26 11:16:16.042674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.042742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.042816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.042887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.042953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.043019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.043085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.043153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.043225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.043290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.043353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.043419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.043505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.043574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.043639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.043705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.043768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.043837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.043910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.043979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.044045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.044109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.044172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.044243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.044299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.044365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.044436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.044501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.044568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.044636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.044702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.044765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.044827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.044887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.044963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.045022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.045083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.045145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.045214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.045278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.045338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.045418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.045491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.045552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.045616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.045682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.045748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.045819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.045894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.045960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.046026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.569 [2024-07-26 11:16:16.046094] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:08:20.569 [2024-07-26 11:16:16.046162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:08:20.571 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:08:20.575 [2024-07-26 11:16:16.087052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:08:20.575
[2024-07-26 11:16:16.087124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.575 [2024-07-26 11:16:16.087180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.575 [2024-07-26 11:16:16.087242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.575 [2024-07-26 11:16:16.087303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.575 [2024-07-26 11:16:16.087365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.575 [2024-07-26 11:16:16.087436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.575 [2024-07-26 11:16:16.087500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.575 [2024-07-26 11:16:16.087562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.575 [2024-07-26 11:16:16.087625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.575 [2024-07-26 11:16:16.087684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.575 [2024-07-26 11:16:16.087749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.575 [2024-07-26 11:16:16.087819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.575 [2024-07-26 11:16:16.087885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.575 [2024-07-26 11:16:16.087948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.575 [2024-07-26 11:16:16.088012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.575 [2024-07-26 11:16:16.088074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.575 [2024-07-26 11:16:16.088136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.575 [2024-07-26 11:16:16.088201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.575 [2024-07-26 11:16:16.088264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.088325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.088390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.088463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.088524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.088587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.088653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.088718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.088790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.088858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.088925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.088993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.089061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.089125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.089190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.089259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.089324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.089391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.089631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.089703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.089769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.089835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.089907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.089975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.090042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.090108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.090175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.090244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.090310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.090374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.090450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.090522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.090587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.090654] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.090720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.090782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.090844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.090914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.090975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.091044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.091110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.091176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.091238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.091304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.091367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.091423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.091493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.091567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.091630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.091700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.091759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.092490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.092569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.092641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.092710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.092783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.092855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.092929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.092995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 
[2024-07-26 11:16:16.093060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.093126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.093191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.093258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.093326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.093390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.093467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.093534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.093601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.093674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.093746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.093812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.093878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.093944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.094012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.094084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.094154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.094218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.094288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.094344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.094405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.094479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.094543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.094604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.094668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.094731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.094795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.094859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.094921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.094989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.095053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.095121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.095188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.095253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.095318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.095376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.095446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.095514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.576 [2024-07-26 11:16:16.095579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.095644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.095708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.095771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.095831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.095894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.095957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.096018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.096082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.096148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.096219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.096284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.096350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.096417] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.096492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.096565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.096637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.096705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.096930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.097004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.097071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.097138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.097203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.097280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.097352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.097419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.097493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.097559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.097615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.097678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.097743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.097817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.097890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.097952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.098018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.098081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.098138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.098199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.098265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 
[2024-07-26 11:16:16.098332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.098398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.098468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.098537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.098596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.098664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.098728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.098791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.098855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.099586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.099659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.099728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.099794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.099861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.099926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.099991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.100060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.100132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.100198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.100267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.100335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.100410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.100486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.100566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.100637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.100702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.100768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.100838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.100901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.100964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.101032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.101100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.101169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.101234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.101307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.101372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.101441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.101503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.101563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.101625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.101694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.101756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.101819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.101882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.101947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.102014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.102070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.102130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.102200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.102266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.102334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.102400] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.102473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.102529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.102594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.102656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.102717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.102782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.102841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.577 [2024-07-26 11:16:16.102909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.102969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.103029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.103089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.103165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.103233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.103302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.103366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.103439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.103504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.103571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.103633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.103700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.103765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.103989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.104057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.104124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.104199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 
[2024-07-26 11:16:16.104264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.104333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.104399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.104477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.104546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.104612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.104683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.104749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.104818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.104895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.104960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.105030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.105096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.105152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.105217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.105277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.105340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.105400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.105477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.105545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.105610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.105676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.105735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.105795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.105858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.105922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.105983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.106055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.106120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.106926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.106991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.107056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.107114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.107183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.107252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.107318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.107395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.107470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.107543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.107613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.107683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.107753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.107817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.107883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.107948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.108013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.108078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.108143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.108215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.108288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.108353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.108416] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.108490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.108556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.108629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.108699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.108763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.108829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.108898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.108966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.109029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.109094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.109156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.109211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.109275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.109336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.109400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.109474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.109540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.109607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.109674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.109753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.109812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.109880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.109949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.110012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.110081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 
[2024-07-26 11:16:16.110147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.578 [2024-07-26 11:16:16.110210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.579 [2024-07-26 11:16:16.110272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.579 [2024-07-26 11:16:16.110332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.579 [2024-07-26 11:16:16.110396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.579 [2024-07-26 11:16:16.110465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.579 [2024-07-26 11:16:16.110530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.579 [2024-07-26 11:16:16.110596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.579 [2024-07-26 11:16:16.110657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.579 [2024-07-26 11:16:16.110720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.579 [2024-07-26 11:16:16.110784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.579 [2024-07-26 11:16:16.110857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.579 [2024-07-26 11:16:16.110919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.579 [2024-07-26 11:16:16.110984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.579 [2024-07-26 11:16:16.111046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.579 [2024-07-26 11:16:16.111110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.579 [2024-07-26 11:16:16.111330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.579 [2024-07-26 11:16:16.111397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.579 [2024-07-26 11:16:16.111472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.579 [2024-07-26 11:16:16.111543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.579 [2024-07-26 11:16:16.111611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.579 [2024-07-26 11:16:16.111679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.579 [2024-07-26 11:16:16.111743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.579 [2024-07-26 11:16:16.111810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.579 [2024-07-26 11:16:16.111877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.579 [2024-07-26 11:16:16.111948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:20.579 [2024-07-26 11:16:16.112015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:08:20.582 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* lines, timestamps 2024-07-26 11:16:16.112087 through 11:16:16.152372, elided as verbatim duplicates ...]
[2024-07-26 11:16:16.152445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.585 [2024-07-26 11:16:16.152510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.585 [2024-07-26 11:16:16.152575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.585 [2024-07-26 11:16:16.152630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.585 [2024-07-26 11:16:16.152691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.585 [2024-07-26 11:16:16.152756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.585 [2024-07-26 11:16:16.152818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.585 [2024-07-26 11:16:16.152881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.585 [2024-07-26 11:16:16.152946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.585 [2024-07-26 11:16:16.153010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.585 [2024-07-26 11:16:16.153071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.585 [2024-07-26 11:16:16.153131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.585 [2024-07-26 11:16:16.153192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.585 [2024-07-26 11:16:16.153256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.585 [2024-07-26 11:16:16.153320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.585 [2024-07-26 11:16:16.153387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.585 [2024-07-26 11:16:16.153463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.585 [2024-07-26 11:16:16.153530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.585 [2024-07-26 11:16:16.153600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.585 [2024-07-26 11:16:16.153664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.585 [2024-07-26 11:16:16.153734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.585 [2024-07-26 11:16:16.153800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.585 [2024-07-26 11:16:16.153867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.585 [2024-07-26 11:16:16.153936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.585 [2024-07-26 11:16:16.154002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.585 [2024-07-26 11:16:16.154236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:20.585 [2024-07-26 11:16:16.154306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.585 [2024-07-26 11:16:16.154372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.585 [2024-07-26 11:16:16.154447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.585 [2024-07-26 11:16:16.154516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.154584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.154652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.154728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.154793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.154859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.154923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.154989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.155056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.155125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.155191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.155255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.155320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.155384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.155461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.155526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.155591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.155647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.155710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.155772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.155837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.155900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.155971] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.156045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.156111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.156186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.156865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.156930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.156992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.157055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.157118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.157182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.157246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.157311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.157376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.157449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.157513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.157575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.157639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.157704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.157773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.157839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.157904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.157970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.158034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.158108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.158174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.158237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 
[2024-07-26 11:16:16.158303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.158368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.158441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.158510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.158580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.158648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.158713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.158779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.158861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.158932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.158996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.159061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.159126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.159191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.159255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.159321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.159381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.159452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.159520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.159584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.159647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.159711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.159775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.159838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.159893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.159956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.160018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.160083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.160145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.160216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.160280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.160345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.160400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.160473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.160539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.160601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.160663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.160726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.160788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.160850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.160914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.160979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.161215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.161286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.161352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.161419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.161494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.161564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.161628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.586 [2024-07-26 11:16:16.161693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.161760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.161826] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.161906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.161983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.162048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.162116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.162182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.162249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.162316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.162382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.162455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.162520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.162588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.162654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.162721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.162791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.162857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.162921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.162985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.163050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.163117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.163185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.163252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.163317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.163391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.163961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.164030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 
[2024-07-26 11:16:16.164091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.164149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.164211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.164276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.164343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.164410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.164484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.164550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.164618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.164680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.164737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.164803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.164863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.164925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.164987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.165049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.165110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.165177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.165241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.165303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.165364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.165425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.165498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.165563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.165630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.165697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.165764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.165834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.165899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.165969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.166036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.166108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.166177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.166241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.166308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.166373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.166447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.166512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.166580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.166645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.166713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.166779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.166844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.166911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.166979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.167045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.167111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.167175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.167230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.167294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.167354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.167420] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.167491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.167556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.167621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.167684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.167743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.167804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.167865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.167930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.167998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.168064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.168279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.168346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.168406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.168482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.168544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.168610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.587 [2024-07-26 11:16:16.168672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.168736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.168801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.168868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.168939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.169004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.169070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.169135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.169200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 
[2024-07-26 11:16:16.169266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.169331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.169399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.169473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.169542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.169610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.169678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.169746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.169811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.169873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.169938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.170004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.170068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.170140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.170213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.171064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.171126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.171194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.171262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.171330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.171398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.171469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.171534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.171598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.171659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.171724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.171783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.171844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.171914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.171973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.172032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.172094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.172160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.172226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.172287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.172348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.172409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.172480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.172545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.172611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.172677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.172744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.172815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.172891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.172964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.173030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.173093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.173158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.173225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.173291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.173355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.173423] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.173499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.173566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.173633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.173698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.173754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.173818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.173886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.173949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.174012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.174081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.174145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.174200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.174263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.174326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.174389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.174465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.174527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.174591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.174653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.174719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.174780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.174845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.174906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.174968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.175033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 
[2024-07-26 11:16:16.175091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.175154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.175377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.175454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.175524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.175601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.175672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.175737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.175803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.175868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.175935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.588 [2024-07-26 11:16:16.176000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.589 [2024-07-26 11:16:16.176066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.589 [2024-07-26 11:16:16.176131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.589 [2024-07-26 11:16:16.176198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.589 [2024-07-26 11:16:16.176267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.589 [2024-07-26 11:16:16.176337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.589 [2024-07-26 11:16:16.176404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.589 [2024-07-26 11:16:16.176478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.589 [2024-07-26 11:16:16.176545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.589 [2024-07-26 11:16:16.176612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.589 [2024-07-26 11:16:16.176684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.589 [2024-07-26 11:16:16.176756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.589 [2024-07-26 11:16:16.176823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.589 [2024-07-26 11:16:16.176889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.589 [2024-07-26 11:16:16.176955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
[identical "ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" line repeated continuously from 00:08:20.589 (11:16:16.177) through 00:08:20.895 (11:16:16.217); duplicate occurrences elided]
00:08:20.893 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:08:20.895 [2024-07-26 11:16:16.217444] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.217510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.217570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.217632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.217698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.217762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.217825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.217889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.217947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.218008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.218069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.218138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.218200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.218273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.218330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.218392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.218464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.218527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.218592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.218654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.218719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.218782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.218847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.218919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.218980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.219047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 
[2024-07-26 11:16:16.219116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.219185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.219250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.219315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.219379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.219455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.219541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.219611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.219683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.219748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.219815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.219884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.219954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.220021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.220084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.220146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.220211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.220279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.220346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.220583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.220654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.220734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.220809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.220875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.220942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.221014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.221081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.221147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.221212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.221277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.221344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.221415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.221489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.221554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.221623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.221682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.221741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.221809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.895 [2024-07-26 11:16:16.221878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.221943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.222007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.222070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.222141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.222202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.222269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.222326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.222388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.222462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.223118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.223186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.223248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.223313] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.223377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.223449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.223514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.223576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.223637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.223701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.223766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.223833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.223898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.223961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.224028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.224093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.224160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.224224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.224290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.224358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.224423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.224500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.224566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.224642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.224712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.224779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.224844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.224908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.224973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 
[2024-07-26 11:16:16.225042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.225112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.225174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.225230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.225295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.225355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.225419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.225491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.225555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.225619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.225677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.225739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.225800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.225862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.225926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.225989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.226058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.226113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.226177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.226242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.226305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.226368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.226447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.226512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.226572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.226634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.226694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.226759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.226833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.226902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.226969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.227034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.227098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.227163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.227232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.227471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.227539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.227608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.227677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.227745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.227809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.227876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.227941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.228012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.228076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.228139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.228203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.228269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.228333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.228411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.228487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.228554] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.228620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.228685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.228751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.228818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.228884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.896 [2024-07-26 11:16:16.228951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.229017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.229085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.229159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.229230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.229295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.229362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.229436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.229493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.229555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.229623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.229687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.230221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.230287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.230353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.230415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.230496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.230568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.230632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.230688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 
[2024-07-26 11:16:16.230750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.230811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.230875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.230938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.231003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.231065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.231131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.231192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.231253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.231312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.231373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.231442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.231505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.231572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.231636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.231699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.231783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.231854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.231922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.231987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.232055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.232121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.232188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.232256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.232324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.232390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.232466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.232532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.232597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.232667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.232739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.232804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.232868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.232933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.232989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.233047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.233110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.233173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.233237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.233299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.233366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.233438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.233501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.233563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.233627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.233691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.233754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.233818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.233879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.233940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.234004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.234067] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.234129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.234194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.234258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.234322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.234576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.234644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.234712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.234776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.234841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.234907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.234976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.235045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.235111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.235177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.235243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.235307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.235372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.235449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.235513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.235578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.235642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.235709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.235777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.235843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 [2024-07-26 11:16:16.235909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.897 
[2024-07-26 11:16:16.235974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.236039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.236105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.236176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.236247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.236313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.236377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.236456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.237295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.237365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.237440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.237506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.237570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.237641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.237704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.237767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.237831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.237888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.237951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.238014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.238079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.238141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.238205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.238269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.238335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.238394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.238469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.238533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.238600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.238659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.238722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.238786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.238853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.238915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.238977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.239040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.239102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.239171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.239229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.239294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.239364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.239437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.239503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.239568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.239633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.239700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.239763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.239830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.239896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.239963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.240031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.240102] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.240172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.240235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.240302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.240367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.240439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.240508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.240574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.240643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.240703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.240759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.240818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.240880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.240942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.241006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.241071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.241132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.241189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.241256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.241323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.898 [2024-07-26 11:16:16.241392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.899 [2024-07-26 11:16:16.241620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.899 [2024-07-26 11:16:16.241682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.899 [2024-07-26 11:16:16.241748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.899 [2024-07-26 11:16:16.241811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.899 [2024-07-26 11:16:16.241874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.899 
[2024-07-26 11:16:16.241938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same *ERROR* line from ctrlr_bdev.c:309 repeated verbatim several hundred times, timestamps 2024-07-26 11:16:16.241938 through 11:16:16.279749, elapsed marks 00:08:20.899-00:08:20.904, elided ...]
00:08:20.904 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... 25 more identical *ERROR* lines, timestamps 11:16:16.279818 through 11:16:16.281372, elided ...]
Read NLB 1 * block size 512 > SGL length 1 00:08:20.904 [2024-07-26 11:16:16.282403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.904 [2024-07-26 11:16:16.282476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.904 [2024-07-26 11:16:16.282542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.904 [2024-07-26 11:16:16.282607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.904 [2024-07-26 11:16:16.282675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.904 [2024-07-26 11:16:16.282741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.904 [2024-07-26 11:16:16.282807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.904 [2024-07-26 11:16:16.282876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.904 [2024-07-26 11:16:16.282941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.904 [2024-07-26 11:16:16.283006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.904 [2024-07-26 11:16:16.283077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.904 [2024-07-26 11:16:16.283144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.904 [2024-07-26 11:16:16.283219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.904 [2024-07-26 11:16:16.283287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.904 [2024-07-26 11:16:16.283353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.904 [2024-07-26 11:16:16.283421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.904 [2024-07-26 11:16:16.283498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.904 [2024-07-26 11:16:16.283565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.904 [2024-07-26 11:16:16.283637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.904 [2024-07-26 11:16:16.283709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.904 [2024-07-26 11:16:16.283777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.283842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.283907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.283975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.284043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.284108] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.284179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.284243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.284298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.284362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.284423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.284496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.284561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.284630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.284697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.284760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.284832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.284890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.284953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.285023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.285086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.285148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.285217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.285291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.285354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.285420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.285486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.285546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.285609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.285672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.285737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 
[2024-07-26 11:16:16.285803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.285867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.285930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.285991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.286051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.286116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.286179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.286247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.286314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.286383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.286458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.286525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.286745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.286818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.286882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.286949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.287017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.287083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.287161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.287227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.287293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.287357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.287424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.287497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.287569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.287638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.287704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.287770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.287840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.287912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.287980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.288047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.288111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.288168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.288232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.288295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.288362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.288426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.288498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.288565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.288629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.289140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.289215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.289279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.289335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.289399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.289475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.289541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.289603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.289666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.289729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.289791] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.289858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.289924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.289987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.290054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.290126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.290197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.290263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.290328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.290399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.290473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.290540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.290607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.290674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.290745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.905 [2024-07-26 11:16:16.290815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.290881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.290945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.291011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.291076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.291142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.291211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.291277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.291343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.291407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.291481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 
[2024-07-26 11:16:16.291548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.291622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.291695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.291760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.291823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.291890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.291954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.292021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.292085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.292150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.292212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.292280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.292343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.292412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.292487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.292559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.292622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.292683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.292739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.292803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.292872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.292935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.293003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.293065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.293135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.293204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.293266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.293331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.293572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.293642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.293704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.293769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.293828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.293897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.293962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.294027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.294093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.294162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.294231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.294299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.294367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.294448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.294521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.294584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.294651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.294716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.294793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.294858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.294925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.294991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.295057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.295126] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.295199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.295266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.295333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.295399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.295475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.295547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.295613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.295678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.295749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.295814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.296517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.296586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.296650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.296707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.296767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.296833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.296904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.296970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.297029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.297091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.297150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.297211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.297272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.297333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.297398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 
[2024-07-26 11:16:16.297468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.297539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.297605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.297669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.297736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.297802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.297869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.297937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.906 [2024-07-26 11:16:16.298007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.298072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.298137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.298207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.298274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.298346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.298411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.298486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.298553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.298619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.298678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.298743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.298808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.298878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.298944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.299012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.299072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.299134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.299196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.299259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.299322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.299384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.299454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.299519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.299584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.299645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.299713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.299779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.299842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.299906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.299968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.300029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.300093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.300164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.300231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.300297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.300361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.300438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.300509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.300575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.300643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.300871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.300941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.301008] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.301072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.301143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.301213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.301279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.301346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.301412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.301508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.301580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.301653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.301720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.301788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.301855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.301922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.301992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.302058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.302127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.302196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.302258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.302322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.302387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.302459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.302533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.302599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.302670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.302738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 
[2024-07-26 11:16:16.302805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.303526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.303604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.303666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.303728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.303801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.303866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.303934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.304000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.304067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.304131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.304204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.304268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.907 [2024-07-26 11:16:16.304331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.908 [2024-07-26 11:16:16.304396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.908 [2024-07-26 11:16:16.304474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.908 [2024-07-26 11:16:16.304546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.908 [2024-07-26 11:16:16.304611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.908 [2024-07-26 11:16:16.304678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.908 [2024-07-26 11:16:16.304744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.908 [2024-07-26 11:16:16.304810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.908 [2024-07-26 11:16:16.304888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.908 [2024-07-26 11:16:16.304957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.908 [2024-07-26 11:16:16.305026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.908 [2024-07-26 11:16:16.305091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.908 [2024-07-26 11:16:16.305159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:20.908 [2024-07-26 11:16:16.305234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.908 [2024-07-26 11:16:16.305304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.908 [2024-07-26 11:16:16.305369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.908 [2024-07-26 11:16:16.305443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.908 [2024-07-26 11:16:16.305511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.908 [2024-07-26 11:16:16.305582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.908 [2024-07-26 11:16:16.305655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.908 [2024-07-26 11:16:16.305722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.908 [2024-07-26 11:16:16.305787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.908 [2024-07-26 11:16:16.305848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.908 [2024-07-26 11:16:16.305910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.908 [2024-07-26 11:16:16.305975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.908 [2024-07-26 11:16:16.306037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.908 [2024-07-26 11:16:16.306101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.908 [2024-07-26 11:16:16.306168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.908 [2024-07-26 11:16:16.306229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.908 [2024-07-26 11:16:16.306298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.908 [2024-07-26 11:16:16.306364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.908 [2024-07-26 11:16:16.306453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.908 [2024-07-26 11:16:16.306513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.908 [2024-07-26 11:16:16.306576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.908 [2024-07-26 11:16:16.306642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.908 [2024-07-26 11:16:16.306708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.908 [2024-07-26 11:16:16.306769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.908 [2024-07-26 11:16:16.306833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.908 [2024-07-26 11:16:16.306896] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.908 
[... ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 -- identical message repeated several hundred times between 2024-07-26 11:16:16.306959 and 11:16:16.347525; per-message timestamps elided ...] 
[2024-07-26 11:16:16.347525] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.347757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.347835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.347901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.347965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.348029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.348096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.348162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.348233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.348299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.348366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.348440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.348506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.348577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.348641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.348708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.348772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.348837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.348902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.348966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.349031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.349092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.349157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.349232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.349303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.349368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 
[2024-07-26 11:16:16.349438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.349500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.349563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.349634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.349689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.349755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.349824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.349889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.349952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.350013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.350078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.350134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.350193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.350259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.350332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.350395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.350466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.350531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.350594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.350659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.350724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.351526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.351600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.351671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.351739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.351802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.351868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.351940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.352006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.352075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.352158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.352222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.352286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.352347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.352404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.352478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.352545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.352608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.352672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.352740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.352805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.352863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.352921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.352981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.353053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.353119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.353180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.353246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.353308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.353369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.353439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.353502] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.353566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.353632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.353696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.353760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.353820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.353883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.353944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.354005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.354073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.354145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.354216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.914 [2024-07-26 11:16:16.354282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.354347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.354419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.354493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.354559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.354624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.354690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.354761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.354828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.354895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.354960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.355028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.355102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.355169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 
[2024-07-26 11:16:16.355233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.355299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.355366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.355446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.355518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.355585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.355651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.355721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.355959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.356032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.356108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.356176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.356241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.356306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:20.915 [2024-07-26 11:16:16.356382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.356456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.356520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.356576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.356647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.356708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.356781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.356846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.356909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.356975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.357037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 
11:16:16.357118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.357174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.357236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.357298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.357366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.357440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.357505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.357570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.357633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.357696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.357754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.357818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.357881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.357946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.358012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.358076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.358140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.358909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.358979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.359044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.359113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.359177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.359243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.359313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.359378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.359459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:08:20.915 [2024-07-26 11:16:16.359527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.359596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.359665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.359732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.359797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.359862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.359928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.359997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.360063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.360123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.360183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.360247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.360308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.360374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.360450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.360513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.360579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.360639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.360705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.360765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.360837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.360902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.360965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.361029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.361100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.361159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.361237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.361303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.361366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.361436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.361500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.361564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.915 [2024-07-26 11:16:16.361633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.361698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.361758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.361822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.361890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.361956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.362021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.362093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.362160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.362230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.362296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.362365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.362440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.362510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.362576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.362643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.362714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.362784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.362852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.362918] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.362983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.363051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.363117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.363349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.363421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.363496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.363561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.363627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.363708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.363780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.363846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.363911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.363984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.364055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.364120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.364186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.364252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.364316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.364371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.364452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.364513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.364577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.364638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.364708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.364775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 
[2024-07-26 11:16:16.364837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.364913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.364969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.365035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.365098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.365162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.365224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.365289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.365352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.365416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.365488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.365551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.365621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.365686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.365748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.365812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.365875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.365939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.366004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.366066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.366130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.366195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.366256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.366316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.367125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.367197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.367262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.367329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.367396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.367476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.367550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.367619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.367684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.367750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.367819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.367883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.367938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.368001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.916 [2024-07-26 11:16:16.368062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.368125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.368190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.368253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.368318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.368381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.368446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.368510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.368587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.368648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.368711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.368776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.368844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.368900] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.368962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.369029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.369091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.369153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.369216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.369281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.369343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.369403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.369473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.369534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.369598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.369666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.369729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.369799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.369864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.369930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.369994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.370059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.370125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.370191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.370255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.370319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.370385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.370462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.370533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 
[2024-07-26 11:16:16.370602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.370668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.370741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.370818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.370888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.370955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.371020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.371084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.371150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.371224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.371292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.371545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.371618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.371684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.371749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.371819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.371889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.371957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.372022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.372090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.372146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.372212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.372281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.372346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.372414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.917 [2024-07-26 11:16:16.372488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
[... hundreds of identical "ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" messages (00:08:20.917-00:08:20.923, timestamps 2024-07-26 11:16:16.372552 through 11:16:16.412954) omitted ...]
00:08:20.923 [2024-07-26 11:16:16.413019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.413084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.413158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.413228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.413295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.413364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.413441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.413516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.413577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.413637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.413700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.413764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.413839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.413905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.413960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.414023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.414083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.414153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.414218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.414278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.414338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.414404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.414479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.414546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.414610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.414673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.414743] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.414805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.414868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.414936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.415007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.415073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.415139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.415208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.415276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.415349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.415414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.415489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.415557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.415624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.415697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.415763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.415834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.415899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.415964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.416036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.416109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.416173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.416241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.416309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.416374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.416455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 
[2024-07-26 11:16:16.416522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.416585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.416651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.416874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.416938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.417002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.417068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.417132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.417188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.417249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.417314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.923 [2024-07-26 11:16:16.417379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.417463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.417528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.417608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.417672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.417741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.417808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.417872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.417951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.418845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.418916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.418983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.419056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.419123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.419189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.419253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.419323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.419394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.419467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.419536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.419604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.419678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.419750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.419816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.419881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.419944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.420009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.420084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.420156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.420221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.420290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.420357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.420425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.420505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.420571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.420635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.420697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.420758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.420828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.420892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.420955] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.421020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.421089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.421152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.421217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.421286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.421344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.421405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.421476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.421545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.421606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.421670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.421732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.421799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.421857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.421918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.421984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.422047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.422114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.422175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.422236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.422299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.422362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.422442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.422517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.422584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 
[2024-07-26 11:16:16.422649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.422717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.422789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.422857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.422928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.423002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.423072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.423303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.423378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.423453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.423518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.423584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.423654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.423722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.423789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.423855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.423921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.423987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.424053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.424120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.424190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.424256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.424321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.424388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.424464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.424533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.424590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.424650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.424721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.424784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.424854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.924 [2024-07-26 11:16:16.424917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.424981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.425042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.425112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.425173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.425236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.425316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.425384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.425456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.425521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.425584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.425646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.425717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.425774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.425833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.425895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.425958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.426022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.426089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.426155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.426222] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.426284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.426854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.426929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.426998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.427067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.427132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.427196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.427261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.427328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.427395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.427471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.427554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.427626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.427692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.427760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.427829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.427903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.427973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.428043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.428111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.428178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.428253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.428319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.428385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.428453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 
[2024-07-26 11:16:16.428518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.428590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.428653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.428715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.428779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.428845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.428911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.428982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.429038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.429104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.429180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.429250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.429317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.429381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.429455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.429521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.429583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.429640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.429697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.429760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.429825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.429889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.429950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.430015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.430080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.430147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.430213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.430272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.430340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.430406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.430483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.430548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.430616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.430683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.430756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.430824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.430890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.430958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.431027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.925 [2024-07-26 11:16:16.431097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.431327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.431397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.431479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.431548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.431615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:20.926 [2024-07-26 11:16:16.431683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.431749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.431816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.431883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.431948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.432014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:08:20.926 [2024-07-26 11:16:16.432079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.432145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.432220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.432286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.432342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.432403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.433141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.433217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.433280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.433344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.433411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.433487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.433547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.433608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.433672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.433735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.433800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.433862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.433929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.433993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.434057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.434122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.434184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.434251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.434326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.434394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.434468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.434536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.434609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.434676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.434741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.434808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.434873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.434942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.435011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.435077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.435143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.435209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.435276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.435349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.435417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.435495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.435563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.435631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.435695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.435762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.435827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.435894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.435966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.436040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.436105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.436169] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.436225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.436287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.436355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.436418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.436496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.436563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.436627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.436692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.436757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.436813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.436879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.436940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.437005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.437074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.437137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.437205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.437268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.437332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.437572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.437640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.437710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.437772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.437835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.437898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 [2024-07-26 11:16:16.437971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.926 
[2024-07-26 11:16:16.438030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... same ctrlr_bdev.c:309 *ERROR* line repeated continuously, 2024-07-26 11:16:16.438094 through 11:16:16.478864 ...]
[2024-07-26 11:16:16.478932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.932
[2024-07-26 11:16:16.478997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.932 [2024-07-26 11:16:16.479062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.932 [2024-07-26 11:16:16.479129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.932 [2024-07-26 11:16:16.479195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.932 [2024-07-26 11:16:16.479261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.932 [2024-07-26 11:16:16.479329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.932 [2024-07-26 11:16:16.479397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.932 [2024-07-26 11:16:16.479473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.932 [2024-07-26 11:16:16.479539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.932 [2024-07-26 11:16:16.479609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.932 [2024-07-26 11:16:16.479677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.932 [2024-07-26 11:16:16.479740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.932 [2024-07-26 11:16:16.479803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.932 [2024-07-26 11:16:16.479877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.932 [2024-07-26 11:16:16.479941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.932 [2024-07-26 11:16:16.480004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.932 [2024-07-26 11:16:16.480066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.932 [2024-07-26 11:16:16.480135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.932 [2024-07-26 11:16:16.480197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.932 [2024-07-26 11:16:16.480262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.932 [2024-07-26 11:16:16.480321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.932 [2024-07-26 11:16:16.480382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.932 [2024-07-26 11:16:16.480457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.932 [2024-07-26 11:16:16.480523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.932 [2024-07-26 11:16:16.480591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.932 [2024-07-26 11:16:16.480650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:20.932 [2024-07-26 11:16:16.480712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.932 [2024-07-26 11:16:16.480774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.932 [2024-07-26 11:16:16.480838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.932 [2024-07-26 11:16:16.480900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.932 [2024-07-26 11:16:16.480963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.932 [2024-07-26 11:16:16.481038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.932 [2024-07-26 11:16:16.481100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.932 [2024-07-26 11:16:16.481165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.932 [2024-07-26 11:16:16.481228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.932 [2024-07-26 11:16:16.481292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.932 [2024-07-26 11:16:16.481357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.932 [2024-07-26 11:16:16.481424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.932 [2024-07-26 11:16:16.481499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.481564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.481630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.481696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.481771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.482002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.482077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.482145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.482214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.482279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.482348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.482421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.482496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.482563] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.482633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.482711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.482779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.482844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.482910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.482976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.483049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.483122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.483192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.483253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.483313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.483378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.483457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.483526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.483591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.483661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.483727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.483801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.483867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.483923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.483985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.484050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.484121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.484182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.484243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 
[2024-07-26 11:16:16.484310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.484371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.484444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.484504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.484569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.484635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.484696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.484761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.484826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.484889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.484952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.485017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.485082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.485985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.486055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.486122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.486187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.486260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.486325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.486392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.486467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.486541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.486611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.486677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.486742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.486808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.486872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.486945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.487015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.487080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.487147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.487204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.487268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.487333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.487401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.487475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.487540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.487605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.487673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.487738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.487794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.487854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.487926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.487989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.488052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.488117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.488182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.488250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.488312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.488367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.488440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.488506] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.488572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.488639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.488703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.488764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.488824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.488885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.933 [2024-07-26 11:16:16.488951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.489018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.489082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.489147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.489211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.489276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.489346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.489417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.489493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.489568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.489635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.489709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.489775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.489843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.489910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.489976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.490043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.490108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.490172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 
[2024-07-26 11:16:16.490406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.490483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.490552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.490619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.490687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.490754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.490819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.490886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.490955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.491022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.491078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.491142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.491204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.491268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.491334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.491395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.491472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.491533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.491603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.491659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.491723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.491785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.491854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.491928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.491996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.492061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.492125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.492188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.492244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.492305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.492372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.492444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.492513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.493062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.493136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.493203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.493269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.493338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.493403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.493480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.493545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.493612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.493681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.493749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.493816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.493885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.493949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.494016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.494087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.494158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.494225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.494292] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.494359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.494425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.494503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.494571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.494651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.494723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.494793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.494858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.494924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.494984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.495041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.495100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.495163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.495226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.495296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.934 [2024-07-26 11:16:16.495358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.495425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.495499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.495570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.495628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.495693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.495756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.495820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.495887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.495948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 
[2024-07-26 11:16:16.496013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.496082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.496141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.496202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.496265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.496329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.496395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.496471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.496535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.496599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.496658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.496722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.496787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.496846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.496911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.496988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.497060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.497125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.497189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.497254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.497490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.497559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.497625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.497691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.497757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.497826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.497900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.497967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.498033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.498103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.498171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.498237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.498303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.498370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.498451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.498517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.498583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.498649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.498720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.498791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.498848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.498914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.498977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.499047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.499109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.499184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.499247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.499313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.499377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.499447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.500365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.500446] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.500510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.500576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.500639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.500701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.500760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.500829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.500895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.500960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.501027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.501101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.501173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.501241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.501307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.501372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.501452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.501520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.501584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.501669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.501739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.501807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.501870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.501936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.502004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.502069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.502132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 
[2024-07-26 11:16:16.502203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.502275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.502345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.502413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.502489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.502556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.502625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.502689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.502754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.502809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.502872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.502947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.503010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.503074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.503138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.503200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.935 [2024-07-26 11:16:16.503265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.936 [2024-07-26 11:16:16.503332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.936 [2024-07-26 11:16:16.503388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.936 [2024-07-26 11:16:16.503458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.936 [2024-07-26 11:16:16.503533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.936 [2024-07-26 11:16:16.503596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.936 [2024-07-26 11:16:16.503661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.936 [2024-07-26 11:16:16.503724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.936 [2024-07-26 11:16:16.503790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.936 [2024-07-26 11:16:16.503853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:20.936 [2024-07-26 11:16:16.503923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.936 [2024-07-26 11:16:16.503979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.936 [2024-07-26 11:16:16.504037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.936 [2024-07-26 11:16:16.504102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.936 [2024-07-26 11:16:16.504168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.936 [2024-07-26 11:16:16.504232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.936 [2024-07-26 11:16:16.504294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.936 [2024-07-26 11:16:16.504360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.936 [2024-07-26 11:16:16.504422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.936 [2024-07-26 11:16:16.504492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.936 [2024-07-26 11:16:16.504554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.936 [2024-07-26 11:16:16.504790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.936 [2024-07-26 11:16:16.504862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.936 [2024-07-26 11:16:16.504929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.936 [2024-07-26 11:16:16.504994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.936 [2024-07-26 11:16:16.505061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.936 [2024-07-26 11:16:16.505130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.936 [2024-07-26 11:16:16.505194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.936 [2024-07-26 11:16:16.505260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.936 [2024-07-26 11:16:16.505327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.936 [2024-07-26 11:16:16.505395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.936 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:20.936 [2024-07-26 11:16:16.505470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.936 [2024-07-26 11:16:16.505537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.936 [2024-07-26 11:16:16.505600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:20.936 [2024-07-26 11:16:16.505667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:08:20.936 [2024-07-26 11:16:16.505734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the preceding *ERROR* line repeats verbatim several hundred times (application timestamps 11:16:16.505805 through 11:16:16.544168, Jenkins clock 00:08:20.936 through 00:08:21.214); identical duplicates omitted ...]
00:08:21.214 [2024-07-26 11:16:16.544237] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.214 [2024-07-26 11:16:16.544303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.214 [2024-07-26 11:16:16.544370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.214 [2024-07-26 11:16:16.544448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.214 [2024-07-26 11:16:16.544514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.214 [2024-07-26 11:16:16.544578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.214 [2024-07-26 11:16:16.544645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.214 [2024-07-26 11:16:16.544710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.214 [2024-07-26 11:16:16.544778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.214 [2024-07-26 11:16:16.544843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.214 [2024-07-26 11:16:16.544909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.214 [2024-07-26 11:16:16.544975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.214 [2024-07-26 11:16:16.545038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.214 [2024-07-26 11:16:16.545101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.214 [2024-07-26 11:16:16.545165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.214 [2024-07-26 11:16:16.545221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.214 [2024-07-26 11:16:16.545921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.214 [2024-07-26 11:16:16.546003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.214 [2024-07-26 11:16:16.546071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.214 [2024-07-26 11:16:16.546135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.214 [2024-07-26 11:16:16.546191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.214 [2024-07-26 11:16:16.546256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.214 [2024-07-26 11:16:16.546318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.214 [2024-07-26 11:16:16.546383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.214 [2024-07-26 11:16:16.546458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.214 [2024-07-26 11:16:16.546521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 
[2024-07-26 11:16:16.546593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.546660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.546727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.546792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.546858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.546922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.546992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.547063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.547129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.547200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.547266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.547335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.547409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.547485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.547551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.547619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.547688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.547758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.547823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.547890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.547960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.548026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.548097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.548164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.548228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.548295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.548360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.548435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.548503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.548568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.548634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.548693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.548767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.548829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.548893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.548958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.549029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.549101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.549165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.549228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.549292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.549352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.549414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.549492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.549559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.549623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.549693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.549757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.549821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.549884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.549945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.550007] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.550070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.550134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.550366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.550440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.550504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.550563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.550627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.550693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.550759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.550834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.550899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.550967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.551036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.551115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.551182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.551247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.551315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.551382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.215 [2024-07-26 11:16:16.551461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.551530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.551599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.551664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.551734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.551812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.551877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 
[2024-07-26 11:16:16.551943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.552012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.552078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.552146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.552217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.552287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.552806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.552877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.552945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.553010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.553074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.553156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.553212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.553275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.553349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.553415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.553487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.553553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.553620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.553684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.553745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.553813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.553873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.553938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.554000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.554068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.554130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.554194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.554258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.554323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.554385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.554457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.554525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.554594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.554661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.554728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.554795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.554859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.554923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.554988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.555055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.555124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.555188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.555258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.555324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.555390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.555470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.555542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.555610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.555676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.555744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.555816] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.555881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.555947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.556018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.556089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.556161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.556224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.556287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.556352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.556422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.556487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.556554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.556615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.556681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.556749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.556817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.556882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.556948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.557009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.557249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.557315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.557381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.557453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.557520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.557581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.216 [2024-07-26 11:16:16.557644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 
[2024-07-26 11:16:16.557706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.557773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.557840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.557904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.557966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.558031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.558093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.558158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.558226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.558306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.558372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.558444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.558511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.558577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.558648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.558715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.558779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.558844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.558911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.558972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.559036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.559098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.559160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.559225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.559281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.559347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.559413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.560291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.560366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.560449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.560520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.560588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.560654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.560723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.560788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.560858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.560924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.560987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.561042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.561107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.561170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.561240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.561309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.561373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.561446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.561507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.561575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.561635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.561697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.561759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.561820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.561885] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.561949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.562014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.562076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.562134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.562198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.562259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.562326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.562400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.562481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.562550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.562617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.562691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.562760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.562826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.562892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.562960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.563029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.563098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.563163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.563229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.563295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.563367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.563448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.563521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.563587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 
[2024-07-26 11:16:16.563653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.563724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.563791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.563856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.563923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.563989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.564059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.564126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.564190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.564256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.564312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.564371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.564439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.564503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.564723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.564789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.564851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.564911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.217 [2024-07-26 11:16:16.564971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.565036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.565100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.565171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.565235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.565299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.565365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.565423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.565492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.565562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.565626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.565687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.565750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.565818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.565883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.565949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.566015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.566078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.566142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.566206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.566269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.566334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.566399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.566474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.566545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.567049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.567112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.567175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.567237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.567303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.567364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.567426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.567509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.567576] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.567649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.567713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.567780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.567859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.567931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.568001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.568067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.568133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.568201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.568269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.568333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.568407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.568488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.568562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.568627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.568696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.568762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.568829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.568897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.568963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.569031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.569095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.569161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.569229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 [2024-07-26 11:16:16.569291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.218 
[2024-07-26 11:16:16.569361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeats verbatim for every read from 11:16:16.569419 through 11:16:16.582109 ...]
00:08:21.220 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
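For context on the flood above: the SPDK NVMe-oF target rejects each read because the requested transfer (NLB * block size) is larger than the data buffer described by the command's SGL, and the suppressed completion status sct=0, sc=15 is Generic Command Status / Data SGL Length Invalid, which matches that check. Below is a minimal standalone C sketch of the validation implied by the message at ctrlr_bdev.c:309; the names are illustrative, not the actual SPDK source.

    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Reject a read whose transfer length (NLB * block size) exceeds the
     * length described by the request's SGL, as the log message reports.
     * A real target would then complete the command with sct=0 (Generic),
     * sc=0x0f (15, Data SGL Length Invalid), the status seen above. */
    static bool read_cmd_length_valid(uint64_t nlb, uint32_t block_size,
                                      uint32_t sgl_length)
    {
            if (nlb * block_size > sgl_length) {
                    fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
                            " > SGL length %" PRIu32 "\n",
                            nlb, block_size, sgl_length);
                    return false;
            }
            return true;
    }

    int main(void)
    {
            /* The exact case from the log: 1 block of 512 bytes vs. a 1-byte SGL. */
            return read_cmd_length_valid(1, 512, 1) ? 0 : 1;
    }

The errors appear to be expected here: ns_hotplug_stress keeps I/O running while namespaces are added and removed, so reads race with hot-remove, and the kill -0 and rpc.py nvmf_subsystem_remove_ns lines below are the script confirming the target process is still alive and then detaching namespace 1 from nqn.2016-06.io.spdk:cnode1.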
[... the same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeats verbatim from 11:16:16.582820 through 11:16:16.589638 ...]
00:08:21.221 true
[... the same *ERROR* line repeats from 11:16:16.589706 through 11:16:16.607734 ...]
00:08:21.224 11:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2009651
[... three more occurrences of the same *ERROR* line, 11:16:16.607807 through 11:16:16.607945 ...]
00:08:21.224 11:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
[... the same *ERROR* line repeats from 11:16:16.608011 through 11:16:16.609632 ...]
00:08:21.225 [2024-07-26
11:16:16.609696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.609756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.609822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.609887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.609969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.610036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.610103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.610170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.610241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.610310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.610379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.610454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.610520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.610580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.610645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.610704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.610771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.610837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.610906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.610972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.611039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.611108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.611177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.611243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.611311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.611378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:08:21.225 [2024-07-26 11:16:16.611453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.611526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.611599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.611665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.611731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.611796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.611864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.611928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.611994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.612058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.612126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.612954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.613024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.613083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.613144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.613213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.613276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.613340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.613403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.613475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.613538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.613602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.613672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.613749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.613820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.613886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.613952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.614017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.614084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.614155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.614224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.614292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.614359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.614425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.614505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.614576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.614644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.614717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.614794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.614863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.614938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.615011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.615078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.615144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.615217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.615294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.615360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.615437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.615511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.225 [2024-07-26 11:16:16.615579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.615648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.615719] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.615787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.615852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.615917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.615982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.616046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.616112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.616174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.616238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.616299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.616359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.616421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.616494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.616560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.616625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.616700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.616763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.616826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.616881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.616945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.617012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.617076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.617141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.617205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.617453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.617523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 
[2024-07-26 11:16:16.617593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.617663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.617729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.617796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.617861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.617927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.617996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.618064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.618131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.618194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.618260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.618325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.618391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.618471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.618547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.618619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.618683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.618750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.618817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.618881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.618949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.619021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.619088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.619151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.619219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.619287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.619354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.619436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.619503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.619572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.619632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.619695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.620416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.620490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.620556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.620620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.620698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.620761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.620825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.620882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.620942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.621008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.621071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.621140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.621207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.621270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.621333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.621402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.621476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.621543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.621616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.621687] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.621751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.621816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.621881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.621950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.622016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.622080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.622151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.622207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.622272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.622331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.622393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.622475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.622540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.622611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.622686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.622760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.622827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.226 [2024-07-26 11:16:16.622894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.622963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.623029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.623094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.623164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.623229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.623296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.623363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 
[2024-07-26 11:16:16.623441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.623519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.623587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.623654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.623721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.623789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.623855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.623922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.623986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.624054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.624118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.624189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.624246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.624306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.624366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.624444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.624520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.624588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.624649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.624875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.624941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.625012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.625077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.625144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.625207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.625269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.625334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.625398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.625488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.625560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.625630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.625695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.625759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.625826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.625890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.625956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.626024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.626094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.626164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.626237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.626304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.626368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.626443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.626520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.626585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.626652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.626726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.626794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.627294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.627362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.627426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.627498] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.627567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.627629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.627706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.627769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.627829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.627892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.627962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.628026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.628083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.628144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.628210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.628275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.628336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.628402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.628476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.628544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.628606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.628666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.628733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.628803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.628870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.628935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.629001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.629067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.629133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 
[2024-07-26 11:16:16.629200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.629269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.629341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.629407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.629482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.629549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.629615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.629691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.629762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.629832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.629898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.227 [2024-07-26 11:16:16.629966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.630032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.630099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.630164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.630231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.630296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.630363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.630440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.630507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.630571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.630637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.630702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.630768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.630837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.630903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.630965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.631030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.631095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.631161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.631221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.631280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.631340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.631404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.631477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.631700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.631769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.631825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.631891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.631955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.632023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.632095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.632169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.632231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.632296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.632359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.632415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.632489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.632554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.632616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.632682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.632747] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.632810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.632871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.632934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.632997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.633059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.633120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.633180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.633245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.633314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.633381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.633461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.633533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.633598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.633665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.633730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.633797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.633864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.634734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.634798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.634867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.634930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.634999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.635066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.635139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 [2024-07-26 11:16:16.635203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.228 
00:08:21.228 [2024-07-26 11:16:16.635270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeats continuously (timestamps 2024-07-26 11:16:16.635270 through 11:16:16.675438) ...]
00:08:21.232 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.675504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.675565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.675626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.675689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.675751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.675816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.675882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.675954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.676027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.676095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.676159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.676225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.676291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.676358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.676423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.676503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.676568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.676634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.676700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.676766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.676839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.676903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.676967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.677034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.677100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.677174] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.677239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.677295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.677357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.677423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.677499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.677568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.677631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.677695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.677758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.677819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.677881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.677946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.678009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.678072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.678138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.678202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.678262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.678511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.234 [2024-07-26 11:16:16.678578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.678642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.678704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.678765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.678829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.678890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.678957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 
[2024-07-26 11:16:16.679022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.679087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.679158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.679223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.679290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.679360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.679442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.679513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.679581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.679648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.679713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.679779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.679849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.679921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.679989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.680053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.680118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.680185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.680261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.680337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.680404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.680478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.680545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.680613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.680682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.680747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.680814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.680880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.680945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.681012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.681073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.681139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.681202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.681265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.681330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.681398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.681469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.681540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.681605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.681664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.681729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.681794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.681862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.681928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.681991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.682058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.682115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.682179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.682240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.682301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.682364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.682435] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.682500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.682560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.682624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.683220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.683292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.683361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.683440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.683517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.683587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.683654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.683720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.683794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.683860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.683925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.683990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.684056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.684121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.684197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.684268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.684333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.684401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.684477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.684552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.684621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.684688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 
[2024-07-26 11:16:16.684745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.684811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.684876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.684939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.685005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.685068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.685138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.685201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.685262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.685319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.685384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.685454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.685518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.235 [2024-07-26 11:16:16.685582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.685653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.685716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.685780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.685843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.685899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.685965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.686031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.686098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.686161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.686226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.686292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.686354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.686412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.686487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.686554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.686618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.686686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.686753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.686816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.686881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.686947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.687013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.687086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.687157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.687224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.687291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.687355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.687422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.688361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.688442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.688507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.688576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.688644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.688699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.688765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.688828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.688900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.688974] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.689041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.689104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.689167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.689231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.689288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.689348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.689412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.689484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.689549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.689609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.689672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.689733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.689804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.689867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.689928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.689991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.690055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.690116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.690178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.690242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.690310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.690378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.690453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.690519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.690586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 
[2024-07-26 11:16:16.690653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.690718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.690783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.690852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.690917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.690981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.691047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.691126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.691192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.691251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.691313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.691377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.691451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.691517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.691579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.691650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.691714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.691780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.691850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.691919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.691987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.692047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.692113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.692179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.692241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.692304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.692369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.692446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.692510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.692740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.692807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.692877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.692949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.236 [2024-07-26 11:16:16.693018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.693084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.693150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.693216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.693291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.693362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.693438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.693511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.693581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.693652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.693721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.693785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.694267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.694338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.694406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.694487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.694557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.694622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.694690] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.694758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.694824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.694890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.694945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.695009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.695071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.695132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.695196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.695265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.695328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.695396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.695466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.695526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.695592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.695656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.695720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.695783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.695847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.695909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.695972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.696034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.696104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.696165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.696229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.696294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 
[2024-07-26 11:16:16.696354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.696420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.696497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.696565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.696631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.696704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.696777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.696847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.696914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.696980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.697048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.697117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.697183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.697249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.697315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.697633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.697711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.697780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.697846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.697911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.697978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.698046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.698119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.698189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.698257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.698323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.698383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.698457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.698529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.698592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.698655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.698719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.698786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.698851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.698915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.698976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.699035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.699099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.699161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.699227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.699297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.699359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.699442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.699509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.699565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.699625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.699691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.699757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.699819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.699883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.699949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.700011] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.237 [2024-07-26 11:16:16.700072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.700134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.700196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.700262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.700324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.700387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.700463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.700540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.700611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.700682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.700746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.700814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.700889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.700960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.701024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.701090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.701156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.701224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.701296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.701365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.701441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.701507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.701573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.701642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.701709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 
[2024-07-26 11:16:16.701781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.701860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.702095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.702168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.702236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.702292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.702355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.702424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.702501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.702565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.702628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.702697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.702769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.702835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.702894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.702954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.703015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.703089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.703877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.703945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.704011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.704074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.704137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.704208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:21.238 [2024-07-26 11:16:16.704262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:22.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:22.171 11:16:17 
00:08:22.171 11:16:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:22.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:22.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:22.429 11:16:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:08:22.429 11:16:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:08:22.687 true
00:08:22.687 11:16:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2009651
00:08:22.687 11:16:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:24.599 Initializing NVMe Controllers
00:08:24.599 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:24.599 Controller IO queue size 128, less than required.
00:08:24.599 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:24.599 Controller IO queue size 128, less than required.
00:08:24.599 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:24.599 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:24.599 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:08:24.599 Initialization complete. Launching workers.
00:08:24.599 ========================================================
00:08:24.599 Latency(us)
00:08:24.599 Device Information : IOPS MiB/s Average min max
00:08:24.599 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3599.28 1.76 22827.13 2772.52 1015309.38
00:08:24.599 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11877.01 5.80 10777.31 2456.46 505818.28
00:08:24.599 ========================================================
00:08:24.599 Total : 15476.29 7.56 13579.71 2456.46 1015309.38
00:08:24.599
00:08:24.599 11:16:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:24.599 11:16:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:08:24.599 11:16:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:08:25.533 true
00:08:25.533 11:16:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2009651
00:08:25.533 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2009651) - No such process
00:08:25.533 11:16:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2009651
00:08:25.533 11:16:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:25.792 11:16:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:26.050 11:16:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:08:26.050 11:16:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:08:26.050 11:16:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:08:26.050 11:16:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:26.050 11:16:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:08:26.308 null0
00:08:26.566 11:16:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:26.566 11:16:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:26.566 11:16:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:08:26.824 null1
00:08:26.824 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:26.824 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
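Two sanity checks on the summary above. The Total row is the IOPS-weighted combination of the per-namespace rows: 3599.28 + 11877.01 = 15476.29 IOPS, 1.76 + 5.80 = 7.56 MiB/s, and (3599.28 * 22827.13 + 11877.01 * 10777.31) / 15476.29 ≈ 13579.7 us of average latency, matching the table. The @44-@55 records around it are bash xtrace output from the test script's monitor loop; a sketch reconstructed from those line references (the rpc_cmd wrapper, the PERF_PID name and the initial null_size are assumptions, not verbatim SPDK code):

# Sketch of ns_hotplug_stress.sh lines ~44-55, reconstructed from the trace above.
rpc_cmd() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"; }

PERF_PID=2009651   # pid of the I/O generator in this run (assumed from the trace)
null_size=1019     # the first pass in this excerpt sets it to 1020
while kill -0 "$PERF_PID"; do                                        # line 44
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # line 45: hot-remove NSID 1
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # line 46: hot-add it back
    ((null_size++))                                                  # line 49
    rpc_cmd bdev_null_resize NULL1 "$null_size"                      # line 50: resize under load
done
wait "$PERF_PID"                                                     # line 53
rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1        # line 54
rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2        # line 55

Each pass hot-removes and re-adds NSID 1 while resizing the NULL1 bdev under traffic, which is exactly the churn the failed controller-side reads above are racing against.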
00:08:26.824 11:16:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:08:27.082 null2
[the @59/@60 loop repeats identically for the remaining bdevs: null3 (00:08:27.649), null4 (00:08:27.910), null5 (00:08:28.167), null6 (00:08:28.735) and null7 (00:08:29.251); the duplicated counter traces are collapsed here]
00:08:29.251 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:08:29.251 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:29.251 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:08:29.251 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:29.251 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:29.251 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
[the @62-@64 launch loop forks the remaining workers the same way, add_remove 2 null1 through add_remove 8 null7, each tracing @14/@16/@17 and issuing its first nvmf_subsystem_add_ns for nsids 2-8; the duplicated traces are collapsed here]
00:08:29.252 11:16:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2013905 2013906 2013908 2013910 2013912 2013914 2013916 2013918
00:08:29.510 11:16:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
[matching @18 nvmf_subsystem_remove_ns calls follow at 11:16:25 for nsids 6, 8, 3 and 5]
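The @58-@66 and @14-@18 trace lines describe the parallel phase now running: eight null bdevs, one add_remove worker forked per bdev, then a wait on all of them. Reconstructed as a sketch from those line references (the function body and the rpc_cmd wrapper are inferred from the trace, not copied from the SPDK tree):

# Sketch of ns_hotplug_stress.sh lines ~14-18 and ~58-66, as reconstructed above.
rpc_cmd() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"; }

add_remove() {                        # lines 14-18: churn one namespace
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        rpc_cmd nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}

nthreads=8                            # line 58
pids=()
for ((i = 0; i < nthreads; i++)); do  # lines 59-60: one 100 MB, 4096-byte-block null bdev each
    rpc_cmd bdev_null_create "null$i" 100 4096
done
for ((i = 0; i < nthreads; i++)); do  # lines 62-64: fork the workers
    add_remove $((i + 1)) "null$i" &
    pids+=($!)
done
wait "${pids[@]}"                     # line 66: e.g. pids 2013905 ... 2013918 above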
00:08:29.510 11:16:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
[the @18 round finishes with nvmf_subsystem_remove_ns for nsids 1 and 7]
00:08:29.768 11:16:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:29.768 11:16:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:29.768 11:16:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
[the other workers re-add their namespaces the same way at 11:16:25-11:16:26 (nsids 5, 1, 8, 2, 3, 6 and 7 onto null4, null0, null7, null1, null2, null5 and null6), with further interleaved @17 re-add and @18 remove rounds for all eight nsids through 11:16:26; the duplicated traces are collapsed here]
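With eight workers attaching and detaching nsids 1-8 concurrently, the set of namespaces visible on cnode1 at any instant is racy. As a usage aside (this command does not appear in the trace), the target's current view can be snapshotted with the nvmf_get_subsystems RPC:

# Not part of this test run: list cnode1's currently attached namespaces while
# the add_remove workers churn. The grep keeps just the NQN and nsid fields of
# the pretty-printed JSON output.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems |
    grep -E '"(nqn|nsid)"'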
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:31.062 11:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.062 11:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.062 11:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:31.320 11:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.320 11:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.320 11:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:31.320 11:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.321 11:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.321 11:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:31.321 11:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:31.321 11:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:31.321 11:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:31.321 11:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:31.321 11:16:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.579 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:31.579 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:31.579 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:31.579 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:31.579 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.579 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:31.579 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.579 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.579 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:31.579 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.579 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.579 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:31.579 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.579 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.579 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:31.579 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.579 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.579 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:31.838 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.838 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.838 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:31.838 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.838 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.838 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:31.838 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.838 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.838 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:31.838 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:31.838 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:31.838 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:31.838 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:31.838 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.096 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:32.096 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:32.096 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:32.096 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.096 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.096 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:32.096 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.096 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.096 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:32.096 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.096 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.096 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:32.096 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:32.096 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.096 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:32.355 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.355 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.355 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:32.355 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.355 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.355 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:32.355 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.355 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.355 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:32.355 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:32.355 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.355 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.355 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:32.355 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:32.355 11:16:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:32.355 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:32.614 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.614 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:32.614 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:32.614 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:32.614 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.614 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.614 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:32.614 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.614 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.614 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:32.872 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.872 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.872 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:32.872 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.872 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.872 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:32.872 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.872 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.872 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:32.872 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.872 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.872 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:32.872 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i ))
00:08:32.872 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:32.872 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:32.872 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:32.872 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:32.872 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:32.872 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:33.130 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:33.130 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:33.130 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:33.130 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:33.130 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:33.390 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:33.390 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:33.390 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:33.390 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:33.390 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:33.390 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:33.390 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:33.390 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:33.390 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:33.390 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:33.390 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:33.390 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:33.390 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:33.390 11:16:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:33.390 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:33.390 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:33.390 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:33.649 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:33.649 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:33.649 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:33.649 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:33.649 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:33.649 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:33.649 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:33.649 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:33.649 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:33.649 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:33.649 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:33.649 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:33.649 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:33.908 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:33.908 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:33.909 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:33.909 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:33.909 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:33.909 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:33.909 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:33.909 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:33.909 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:33.909 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:33.909 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:33.909 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:33.909 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:34.167 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:34.167 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:34.167 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:34.167 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:34.167 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:34.167 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:34.167 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:34.167 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:34.168 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:34.168 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:34.168 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:34.168 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:34.168 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:34.168 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:34.168 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:34.168 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:34.168 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:34.168 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:34.426 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:34.426 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:34.426 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:34.426 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:34.426 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
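The interleaved @16/@17/@18 lines above are what several copies of the same three script lines look like when they run at once. A minimal bash sketch of what lines 16-18 of target/ns_hotplug_stress.sh appear to implement, reconstructed from the xtrace alone (the per-namespace worker layout and the add_remove name are assumptions, not the verbatim script):

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  add_remove() {
      local nsid=$1 bdev=$2 i
      for ((i = 0; i < 10; ++i)); do   # ns_hotplug_stress.sh@16 in the trace
          # attach the null bdev as namespace $nsid, then detach it again
          "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # @17
          "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # @18
      done
  }
  for n in {1..8}; do
      add_remove "$n" "null$((n - 1))" &   # eight workers in flight at once, which is
  done                                     # why the add/remove lines above interleave
  wait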
00:08:34.426 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:34.426 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:34.426 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:34.426 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:34.426 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:34.426 11:16:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:34.684 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:34.684 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:34.684 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:34.684 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:34.684 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:34.685 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:34.685 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:34.685 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:34.685 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:34.685 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:34.685 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:34.685 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:34.685 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:34.685 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:34.685 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:34.685 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:34.685 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:34.685 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:34.685 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:34.685 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:34.944 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:34.944 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:34.944 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:34.944 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:34.944 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:34.944 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:34.944 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:34.944 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:34.944 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:34.944 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:35.203 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:35.203 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:35.203 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:35.203 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:35.203 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:35.203 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:35.461 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:35.462 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:35.462 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:35.462 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:35.462 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:35.462 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:35.462 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:08:35.462 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:08:35.462 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup
00:08:35.462 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync
00:08:35.462 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:08:35.462 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e
00:08:35.462 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20}
00:08:35.462 11:16:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:08:35.462 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:08:35.462 11:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:08:35.462 11:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e
00:08:35.462 11:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0
00:08:35.462 11:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2009174 ']'
00:08:35.462 11:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2009174
00:08:35.462 11:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 2009174 ']'
00:08:35.462 11:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 2009174
00:08:35.462 11:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname
00:08:35.462 11:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:35.462 11:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2009174
00:08:35.462 11:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:08:35.462 11:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:08:35.462 11:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2009174'
killing process with pid 2009174
00:08:35.462 11:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 2009174
00:08:35.462 11:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 2009174
00:08:36.028 11:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:08:36.028 11:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:08:36.028 11:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:08:36.028 11:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:08:36.028 11:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:08:36.028 11:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:36.028 11:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:36.028 11:16:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:37.931 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:08:37.931
00:08:37.931 real 0m51.068s
00:08:37.931 user 3m51.959s
00:08:37.931 sys 0m18.086s
00:08:37.931 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:37.931 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:08:37.931 ************************************
00:08:37.931 END TEST nvmf_ns_hotplug_stress
00:08:37.931 ************************************
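Condensed from the trace above, the nvmftestfini teardown is roughly the following sketch (the retry bound {1..20} and the pid come straight from the xtrace; using modprobe failure as the retry condition is an assumption, since the log only shows one successful pass):

  sync                                  # nvmf/common.sh@117
  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break   # @122-@123
  done
  set -e
  kill 2009174 && wait 2009174          # killprocess: stop the nvmf_tgt reactor traced above
  ip -4 addr flush cvl_0_1              # drop the test addresses before the next test starts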
00:08:37.931 11:16:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:08:37.931 11:16:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:08:37.931 11:16:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:37.931 11:16:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:37.931 ************************************
00:08:37.931 START TEST nvmf_delete_subsystem
00:08:37.931 ************************************
00:08:37.931 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:08:37.931 * Looking for test storage...
00:08:38.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:08:38.190 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable
00:08:38.191 11:16:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
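Before any device discovery starts, nvmf/common.sh pins down the ports and the initiator identity; the values traced above reduce to this short sketch (the parameter expansion used to derive the host ID is an assumption, the trace only shows the resulting value):

  NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
  NVME_HOSTNQN=$(nvme gen-hostnqn)       # nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-... above
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # assumed derivation: the host ID reuses the uuid suffix
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  NET_TYPE=phy                           # phy run: real NICs, so PCI discovery runs next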
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=()
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=()
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=()
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=()
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=()
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=()
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=()
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)'
Found 0000:84:00.0 (0x8086 - 0x159b)
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)'
Found 0000:84:00.1 (0x8086 - 0x159b)
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:08:40.748 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:08:40.749 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:08:40.749 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:40.749 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:08:40.749 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:40.749 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]]
00:08:40.749 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:08:40.749 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:40.749 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0'
Found net devices under 0000:84:00.0: cvl_0_0
00:08:40.749 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:08:40.749 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:08:40.749 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:40.749 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:08:40.749 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:40.749 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]]
00:08:40.749 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:08:40.749 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:40.749 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1'
Found net devices under 0000:84:00.1: cvl_0_1
00:08:40.749 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:08:40.749 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:08:40.749 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes
00:08:40.749 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:08:40.749 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:08:40.749 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:08:40.749 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:08:40.749 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:08:40.749 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:08:40.749 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:08:40.749 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:08:40.749 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:08:40.749 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:08:40.749 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:08:40.749 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:08:40.749 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:08:40.749 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:08:40.749 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:08:40.749 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:08:40.749 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:08:40.749 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:08:40.749 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:08:40.749 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:40.749 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:08:41.008 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:41.008 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:08:41.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:41.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.346 ms
00:08:41.008
00:08:41.008 --- 10.0.0.2 ping statistics ---
00:08:41.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:41.008 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms
00:08:41.008 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:41.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:41.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms
00:08:41.008
00:08:41.008 --- 10.0.0.1 ping statistics ---
00:08:41.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:41.008 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms
00:08:41.008 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:41.008 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0
00:08:41.008 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:08:41.008 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:41.008 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:08:41.008 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:08:41.008 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:41.008 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:08:41.008 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp
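Collected out of the trace, the whole nvmftestinit network bring-up is the short sequence below; every interface name and address is exactly as logged above, only the grouping into one runnable block is editorial:

  ip netns add cvl_0_0_ns_spdk                        # the target side gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # the initiator stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns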
00:08:41.008 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:08:41.008 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:08:41.008 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable
00:08:41.008 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:41.008 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2016824
00:08:41.008 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:08:41.008 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2016824
00:08:41.008 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 2016824 ']'
00:08:41.008 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:41.008 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:41.008 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:41.008 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:41.008 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:41.008 [2024-07-26 11:16:36.541111] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization...
00:08:41.008 [2024-07-26 11:16:36.541211] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:41.008 EAL: No free 2048 kB hugepages reported on node 1
00:08:41.008 [2024-07-26 11:16:36.623637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:41.266 [2024-07-26 11:16:36.744529] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:08:41.266 [2024-07-26 11:16:36.744596] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:08:41.266 [2024-07-26 11:16:36.744613] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:41.266 [2024-07-26 11:16:36.744627] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:41.266 [2024-07-26 11:16:36.744639] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:08:41.266 [2024-07-26 11:16:36.744720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:08:41.266 [2024-07-26 11:16:36.744727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:41.266 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:41.266 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0
00:08:41.266 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:08:41.266 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable
00:08:41.266 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:41.266 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:08:41.266 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:08:41.266 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:41.266 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:41.267 [2024-07-26 11:16:36.901158] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:41.267 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:41.267 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:08:41.267 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:41.267 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:41.267 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:41.267 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:08:41.267 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:41.267 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:41.267 [2024-07-26 11:16:36.917516] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:08:41.267 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:41.267 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:08:41.267 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:41.267 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:41.525 NULL1
00:08:41.525 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:41.525 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:08:41.525 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:41.525 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:41.525 Delay0
00:08:41.525 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:41.525 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:41.525 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:41.525 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:41.525 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:41.525 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2016845
00:08:41.525 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2
00:08:41.525 11:16:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
00:08:41.525 EAL: No free 2048 kB hugepages reported on node 1
00:08:41.525 [2024-07-26 11:16:37.032250] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
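Stripped of the rpc_cmd plumbing, the target built above comes down to this RPC sequence (a sketch; rpc_py is the script path logged earlier in this run, and the comment on the delay values is an interpretation of bdev_delay_create's microsecond arguments):

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  "$rpc_py" nvmf_create_transport -t tcp -o -u 8192
  "$rpc_py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  "$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  "$rpc_py" bdev_null_create NULL1 1000 512            # 1000 MiB backing bdev, 512 B blocks
  "$rpc_py" bdev_delay_create -b NULL1 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000      # roughly 1 s of added latency per I/O
  "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0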
00:08:43.424 11:16:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:08:43.424 11:16:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:43.424 11:16:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 starting I/O failed: -6
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Write completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Write completed with error (sct=0, sc=8)
00:08:43.682 starting I/O failed: -6
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Write completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Write completed with error (sct=0, sc=8)
00:08:43.682 starting I/O failed: -6
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Write completed with error (sct=0, sc=8)
00:08:43.682 starting I/O failed: -6
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 starting I/O failed: -6
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Write completed with error (sct=0, sc=8)
00:08:43.682 starting I/O failed: -6
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Write completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 starting I/O failed: -6
00:08:43.682 Write completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 starting I/O failed: -6
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Write completed with error (sct=0, sc=8)
00:08:43.682 Write completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 starting I/O failed: -6
00:08:43.682 Write completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 starting I/O failed: -6
00:08:43.682 Write completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Write completed with error (sct=0, sc=8)
00:08:43.682 starting I/O failed: -6
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 [2024-07-26 11:16:39.293794] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6780000c00 is same with the state(5) to be set
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 starting I/O failed: -6
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Write completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Write completed with error (sct=0, sc=8)
00:08:43.682 starting I/O failed: -6
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Write completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 starting I/O failed: -6
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Write completed with error (sct=0, sc=8)
00:08:43.682 Write completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 starting I/O failed: -6
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Write completed with error (sct=0, sc=8)
00:08:43.682 starting I/O failed: -6
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 starting I/O failed: -6
00:08:43.682 Write completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Write completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 starting I/O failed: -6
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Write completed with error (sct=0, sc=8)
00:08:43.682 starting I/O failed: -6
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Write completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 starting I/O failed: -6
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Write completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 starting I/O failed: -6
00:08:43.682 Write completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 Read completed with error (sct=0, sc=8)
00:08:43.682 starting I/O failed: -6
00:08:43.683 [2024-07-26 11:16:39.294724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017c20 is same with the state(5) to be set
00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Write completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Write completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Write completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Write completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Write completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Write completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Write completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Write completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Write completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Write completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, 
sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Write completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Write completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 [2024-07-26 11:16:39.295243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f678000d490 is same with the state(5) to be set 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:43.683 Read completed with error (sct=0, sc=8) 00:08:44.617 [2024-07-26 11:16:40.253160] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2018ac0 is same with the state(5) to be set 00:08:44.875 Write completed with error (sct=0, sc=8) 00:08:44.875 Read completed with error (sct=0, sc=8) 00:08:44.875 Read completed with error (sct=0, sc=8) 00:08:44.875 Read completed with error (sct=0, sc=8) 00:08:44.875 Read completed with error (sct=0, sc=8) 00:08:44.875 Write completed with error (sct=0, sc=8) 00:08:44.875 Read completed with error (sct=0, sc=8) 00:08:44.875 Read completed with error (sct=0, sc=8) 00:08:44.875 Write completed with error (sct=0, sc=8) 00:08:44.876 Write completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Write completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Write completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Write completed with error (sct=0, sc=8) 00:08:44.876 [2024-07-26 11:16:40.295982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20178f0 is same with the state(5) to be set 00:08:44.876 Write completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Write completed with error (sct=0, sc=8) 00:08:44.876 Write completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Write completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 
00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Write completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 [2024-07-26 11:16:40.297395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f678000d000 is same with the state(5) to be set 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Write completed with error (sct=0, sc=8) 00:08:44.876 Write completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Write completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Write completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Write completed with error (sct=0, sc=8) 00:08:44.876 Write completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Write completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Write completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 [2024-07-26 11:16:40.298730] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f678000d7c0 is same with the state(5) to be set 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Write completed with error (sct=0, sc=8) 00:08:44.876 Write completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Write completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Write completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Write completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Read completed with error (sct=0, sc=8) 00:08:44.876 Write completed with error (sct=0, sc=8) 00:08:44.876 Write completed with error (sct=0, sc=8) 00:08:44.876 [2024-07-26 11:16:40.299623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20173e0 is same with the state(5) to be set 00:08:44.876 Initializing NVMe Controllers 00:08:44.876 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:44.876 Controller IO queue size 128, less than required. 
00:08:44.876 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:44.876 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:44.876 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:44.876 Initialization complete. Launching workers.
00:08:44.876 ========================================================
00:08:44.876 Latency(us)
00:08:44.876 Device Information : IOPS MiB/s Average min max
00:08:44.876 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 163.26 0.08 912953.13 577.38 1043783.36
00:08:44.876 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 166.24 0.08 904343.80 665.32 1012936.68
00:08:44.876 ========================================================
00:08:44.876 Total : 329.50 0.16 908609.57 577.38 1043783.36
00:08:44.876
00:08:44.876 [2024-07-26 11:16:40.300167] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2018ac0 (9): Bad file descriptor
00:08:44.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:08:44.876 11:16:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:44.876 11:16:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:08:44.876 11:16:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2016845
00:08:44.876 11:16:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:08:45.441 11:16:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:08:45.441 11:16:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2016845
00:08:45.441 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2016845) - No such process
00:08:45.441 11:16:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2016845
00:08:45.441 11:16:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:08:45.441 11:16:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2016845
00:08:45.442 11:16:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:08:45.442 11:16:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:45.442 11:16:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:08:45.442 11:16:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:45.442 11:16:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2016845
00:08:45.442 11:16:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1
00:08:45.442 11:16:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:08:45.442 11:16:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:08:45.442 11:16:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:08:45.442 11:16:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:08:45.442 11:16:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:45.442 11:16:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:45.442 11:16:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:45.442 11:16:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:08:45.442 11:16:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:45.442 11:16:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:45.442 [2024-07-26 11:16:40.822035] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:08:45.442 11:16:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:45.442 11:16:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:45.442 11:16:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:45.442 11:16:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:45.442 11:16:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:45.442 11:16:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2017368
00:08:45.442 11:16:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:08:45.442 11:16:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2017368
00:08:45.442 11:16:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:08:45.442 11:16:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:45.442 EAL: No free 2048 kB hugepages reported on node 1
00:08:45.442 [2024-07-26 11:16:40.905873] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
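The recreate pass above drives everything through rpc_cmd, which in these test scripts is effectively a wrapper around scripts/rpc.py talking to the target's /var/tmp/spdk.sock. A minimal standalone sketch of the delete-while-busy scenario this test exercises, with the repo path, NQN, Delay0 bdev name, and perf arguments taken from this run (a running nvmf_tgt with the Delay0 bdev already configured is assumed):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # repo root used by this job
    NQN=nqn.2016-06.io.spdk:cnode1
    # Build the subsystem: create it, expose a TCP listener, attach the namespace.
    "$SPDK/scripts/rpc.py" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns "$NQN" Delay0
    # Start the same perf workload in the background ...
    "$SPDK/build/bin/spdk_nvme_perf" -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    # ... then delete the subsystem while its I/O is still in flight. The
    # in-flight commands complete with errors (sct=0, sc=8), as in the log
    # above, so a non-zero perf exit is the expected outcome.
    "$SPDK/scripts/rpc.py" nvmf_delete_subsystem "$NQN"
    wait "$perf_pid" || true

The (( delay++ > 30 )) / kill -0 / sleep 0.5 lines in the log are the script's bounded wait for perf to exit once the subsystem is gone; kill -0 only probes whether the PID still exists.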
00:08:45.701 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:45.701 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2017368 00:08:45.701 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:46.267 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:46.267 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2017368 00:08:46.267 11:16:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:46.832 11:16:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:46.832 11:16:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2017368 00:08:46.832 11:16:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:47.394 11:16:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:47.394 11:16:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2017368 00:08:47.394 11:16:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:47.957 11:16:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:47.957 11:16:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2017368 00:08:47.957 11:16:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:48.262 11:16:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:48.262 11:16:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2017368 00:08:48.262 11:16:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:48.538 Initializing NVMe Controllers 00:08:48.538 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:48.538 Controller IO queue size 128, less than required. 00:08:48.538 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:48.538 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:48.538 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:48.538 Initialization complete. Launching workers. 
00:08:48.538 ========================================================
00:08:48.538 Latency(us)
00:08:48.538 Device Information : IOPS MiB/s Average min max
00:08:48.538 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005081.73 1000227.95 1041854.13
00:08:48.538 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004916.43 1000238.49 1041751.79
00:08:48.538 ========================================================
00:08:48.538 Total : 256.00 0.12 1004999.08 1000227.95 1041854.13
00:08:48.538
00:08:48.795 11:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:48.795 11:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2017368
00:08:48.795 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2017368) - No such process
00:08:48.795 11:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2017368
00:08:48.795 11:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:08:48.795 11:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:08:48.795 11:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:08:48.795 11:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:08:48.795 11:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:08:48.795 11:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:08:48.795 11:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:08:48.795 11:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:08:48.795 rmmod nvme_tcp
00:08:48.795 rmmod nvme_fabrics
00:08:48.795 rmmod nvme_keyring
00:08:48.795 11:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:08:48.795 11:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:08:48.795 11:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:08:48.795 11:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2016824 ']'
00:08:48.795 11:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2016824
00:08:48.796 11:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 2016824 ']'
00:08:48.796 11:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 2016824
00:08:48.796 11:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname
00:08:48.796 11:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:48.796 11:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2016824
00:08:48.796 11:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:08:48.796 11:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '['
reactor_0 = sudo ']' 00:08:48.796 11:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2016824' 00:08:48.796 killing process with pid 2016824 00:08:48.796 11:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 2016824 00:08:48.796 11:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 2016824 00:08:49.362 11:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:49.362 11:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:49.362 11:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:49.362 11:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:49.362 11:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:49.362 11:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.362 11:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.362 11:16:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.330 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:51.330 00:08:51.330 real 0m13.250s 00:08:51.330 user 0m28.577s 00:08:51.330 sys 0m3.560s 00:08:51.330 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:51.330 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:51.330 ************************************ 00:08:51.330 END TEST nvmf_delete_subsystem 00:08:51.330 ************************************ 00:08:51.330 11:16:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:51.330 11:16:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:51.330 11:16:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:51.330 11:16:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:51.330 ************************************ 00:08:51.330 START TEST nvmf_host_management 00:08:51.330 ************************************ 00:08:51.330 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:51.330 * Looking for test storage... 
00:08:51.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:51.330 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:51.330 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:51.330 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:51.330 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:51.330 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:51.330 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:51.330 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:51.330 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:51.330 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:51.330 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:51.330 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:51.330 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:51.330 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:51.330 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:51.330 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:51.330 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:51.331 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:51.331 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:51.331 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:51.331 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.331 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.331 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.331 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=[paths/export.sh prepends the golangci, protoc, and Go toolchain bin directories (already present several times over) ahead of the standard system PATH; the full, heavily duplicated value is elided]
00:08:51.330 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=[same toolchain directories prepended again; full value elided]
00:08:51.330 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=[same toolchain directories prepended again; full value elided]
00:08:51.330 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:08:51.330 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo [the exported PATH value; elided]
00:08:51.330 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0
00:08:51.330 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:08:51.330 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:08:51.330 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:08:51.330 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:08:51.330 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:08:51.330 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33
-- # '[' -n '' ']' 00:08:51.331 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:51.331 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:51.331 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:51.331 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:51.331 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:51.331 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:51.331 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:51.331 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:51.331 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:51.331 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:51.331 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.331 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.331 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.331 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:51.331 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:51.331 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:08:51.331 11:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:08:54.617 
11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:54.617 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 
-- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:54.617 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:54.617 Found net devices under 0000:84:00.0: cvl_0_0 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:54.617 Found net devices under 0000:84:00.1: cvl_0_1 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.617 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 
0 )) 00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:54.618 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:54.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms
00:08:54.618
00:08:54.618 --- 10.0.0.2 ping statistics ---
00:08:54.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:54.618 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms
00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:54.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:54.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms
00:08:54.618
00:08:54.618 --- 10.0.0.1 ping statistics ---
00:08:54.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:54.618 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms
00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0
00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management
00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget
00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E
00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable
00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2019738
00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2019738
00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2019738 ']'
00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on
UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:54.618 11:16:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:54.618 [2024-07-26 11:16:49.829144] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:08:54.618 [2024-07-26 11:16:49.829245] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.618 EAL: No free 2048 kB hugepages reported on node 1 00:08:54.618 [2024-07-26 11:16:49.915303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:54.618 [2024-07-26 11:16:50.059035] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:54.618 [2024-07-26 11:16:50.059108] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:54.618 [2024-07-26 11:16:50.059129] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:54.618 [2024-07-26 11:16:50.059146] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:54.618 [2024-07-26 11:16:50.059161] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:54.618 [2024-07-26 11:16:50.059234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:54.618 [2024-07-26 11:16:50.059292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:54.618 [2024-07-26 11:16:50.059349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:54.618 [2024-07-26 11:16:50.059352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.553 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:55.553 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:55.553 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:55.553 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:55.553 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:55.553 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:55.553 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:55.553 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.553 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:55.553 [2024-07-26 11:16:50.973495] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:55.553 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.553 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter 
create_subsystem 00:08:55.553 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:55.553 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:55.553 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:55.553 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:55.553 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:55.553 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.553 11:16:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:55.553 Malloc0 00:08:55.553 [2024-07-26 11:16:51.038235] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:55.553 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.553 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:55.553 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:55.553 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:55.553 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2019980 00:08:55.553 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2019980 /var/tmp/bdevperf.sock 00:08:55.553 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:55.553 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:55.553 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:55.553 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2019980 ']' 00:08:55.553 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:55.553 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:55.553 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:55.553 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:55.553 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:55.553 { 00:08:55.553 "params": { 00:08:55.553 "name": "Nvme$subsystem", 00:08:55.553 "trtype": "$TEST_TRANSPORT", 00:08:55.553 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:55.553 "adrfam": "ipv4", 00:08:55.553 "trsvcid": "$NVMF_PORT", 00:08:55.553 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:55.553 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:55.553 "hdgst": ${hdgst:-false}, 00:08:55.553 "ddgst": ${ddgst:-false} 00:08:55.553 }, 00:08:55.553 "method": 
"bdev_nvme_attach_controller" 00:08:55.553 } 00:08:55.553 EOF 00:08:55.553 )") 00:08:55.553 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:55.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:55.553 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:55.553 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:55.553 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:55.554 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:55.554 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:55.554 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:55.554 "params": { 00:08:55.554 "name": "Nvme0", 00:08:55.554 "trtype": "tcp", 00:08:55.554 "traddr": "10.0.0.2", 00:08:55.554 "adrfam": "ipv4", 00:08:55.554 "trsvcid": "4420", 00:08:55.554 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:55.554 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:55.554 "hdgst": false, 00:08:55.554 "ddgst": false 00:08:55.554 }, 00:08:55.554 "method": "bdev_nvme_attach_controller" 00:08:55.554 }' 00:08:55.554 [2024-07-26 11:16:51.125248] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:08:55.554 [2024-07-26 11:16:51.125340] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2019980 ] 00:08:55.554 EAL: No free 2048 kB hugepages reported on node 1 00:08:55.554 [2024-07-26 11:16:51.195274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.812 [2024-07-26 11:16:51.321806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.070 Running I/O for 10 seconds... 
00:08:56.070 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:56.070 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:56.070 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:56.070 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.070 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:56.070 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.070 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:56.070 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:56.070 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:56.070 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:56.070 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:56.070 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:56.070 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:56.070 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:56.070 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:56.071 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:56.071 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.071 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:56.071 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.071 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:56.071 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:56.071 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:56.331 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:56.331 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:56.331 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:56.331 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:56.331 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.331 11:16:51 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:56.331 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.331 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=451 00:08:56.331 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 451 -ge 100 ']' 00:08:56.331 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:56.331 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:56.331 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:56.331 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:56.331 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.331 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:56.331 [2024-07-26 11:16:51.929198] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecc3c0 is same with the state(5) to be set 00:08:56.331 [2024-07-26 11:16:51.929297] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecc3c0 is same with the state(5) to be set 00:08:56.331 [2024-07-26 11:16:51.929314] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecc3c0 is same with the state(5) to be set 00:08:56.331 [2024-07-26 11:16:51.929352] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecc3c0 is same with the state(5) to be set 00:08:56.331 [2024-07-26 11:16:51.929367] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecc3c0 is same with the state(5) to be set 00:08:56.331 [2024-07-26 11:16:51.929379] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecc3c0 is same with the state(5) to be set 00:08:56.331 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.331 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:56.331 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.331 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:56.331 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.331 11:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:56.331 [2024-07-26 11:16:51.943533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:56.331 [2024-07-26 11:16:51.943580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.331 [2024-07-26 11:16:51.943609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:56.331 [2024-07-26 
11:16:51.943626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.331 [2024-07-26 11:16:51.943643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:56.331 [2024-07-26 11:16:51.943659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.331 [2024-07-26 11:16:51.943673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:56.331 [2024-07-26 11:16:51.943688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.331 [2024-07-26 11:16:51.943702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234a540 is same with the state(5) to be set 00:08:56.331 [2024-07-26 11:16:51.943764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.331 [2024-07-26 11:16:51.943787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.331 [2024-07-26 11:16:51.943815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.331 [2024-07-26 11:16:51.943833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.331 [2024-07-26 11:16:51.943850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.331 [2024-07-26 11:16:51.943866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.331 [2024-07-26 11:16:51.943883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.331 [2024-07-26 11:16:51.943898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.331 [2024-07-26 11:16:51.943915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.331 [2024-07-26 11:16:51.943937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.332 [2024-07-26 11:16:51.943955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.332 [2024-07-26 11:16:51.943970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.332 [2024-07-26 11:16:51.943987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.332 [2024-07-26 11:16:51.944002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.332 [2024-07-26 11:16:51.944019] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.332 [2024-07-26 11:16:51.944034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.332 [2024-07-26 11:16:51.944052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.332 [2024-07-26 11:16:51.944067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.332 [2024-07-26 11:16:51.944084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.332 [2024-07-26 11:16:51.944099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.332 [2024-07-26 11:16:51.944116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.332 [2024-07-26 11:16:51.944131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.332 [2024-07-26 11:16:51.944148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.332 [2024-07-26 11:16:51.944164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.332 [2024-07-26 11:16:51.944181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.332 [2024-07-26 11:16:51.944197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.332 [2024-07-26 11:16:51.944214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.332 [2024-07-26 11:16:51.944230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.332 [2024-07-26 11:16:51.944248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.332 [2024-07-26 11:16:51.944264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.332 [2024-07-26 11:16:51.944280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.332 [2024-07-26 11:16:51.944296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.332 [2024-07-26 11:16:51.944312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.332 [2024-07-26 11:16:51.944327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.332 [2024-07-26 11:16:51.944349] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.332 [2024-07-26 11:16:51.944365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.332 [2024-07-26 11:16:51.944382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.332 [2024-07-26 11:16:51.944397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.332 [2024-07-26 11:16:51.944415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.332 [2024-07-26 11:16:51.944440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.332 [2024-07-26 11:16:51.944459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.332 [2024-07-26 11:16:51.944475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.332 [2024-07-26 11:16:51.944492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.332 [2024-07-26 11:16:51.944508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.332 [2024-07-26 11:16:51.944525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.332 [2024-07-26 11:16:51.944540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.332 [2024-07-26 11:16:51.944557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.332 [2024-07-26 11:16:51.944573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.332 [2024-07-26 11:16:51.944590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.332 [2024-07-26 11:16:51.944605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.332 [2024-07-26 11:16:51.944622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.332 [2024-07-26 11:16:51.944638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.332 [2024-07-26 11:16:51.944655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.332 [2024-07-26 11:16:51.944671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.332 [2024-07-26 11:16:51.944688] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.332 [2024-07-26 11:16:51.944703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.332 [2024-07-26 11:16:51.944720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.332 [2024-07-26 11:16:51.944735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.332 [2024-07-26 11:16:51.944751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.332 [2024-07-26 11:16:51.944770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.332 [2024-07-26 11:16:51.944789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.332 [2024-07-26 11:16:51.944805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.332 [2024-07-26 11:16:51.944821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.332 [2024-07-26 11:16:51.944836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.332 [2024-07-26 11:16:51.944854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.332 [2024-07-26 11:16:51.944870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.332 [2024-07-26 11:16:51.944887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.332 [2024-07-26 11:16:51.944901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.332 [2024-07-26 11:16:51.944918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.332 [2024-07-26 11:16:51.944933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.332 [2024-07-26 11:16:51.944949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.332 [2024-07-26 11:16:51.944965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.332 [2024-07-26 11:16:51.944982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.332 [2024-07-26 11:16:51.944997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.332 [2024-07-26 11:16:51.945013] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.332 [2024-07-26 11:16:51.945028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.332 [2024-07-26 11:16:51.945045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.332 [2024-07-26 11:16:51.945060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.332 [2024-07-26 11:16:51.945076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.332 [2024-07-26 11:16:51.945091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.332 [2024-07-26 11:16:51.945108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.332 [2024-07-26 11:16:51.945123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.332 [2024-07-26 11:16:51.945140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.332 [2024-07-26 11:16:51.945155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.333 [2024-07-26 11:16:51.945175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.333 [2024-07-26 11:16:51.945192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.333 [2024-07-26 11:16:51.945210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.333 [2024-07-26 11:16:51.945226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.333 [2024-07-26 11:16:51.945244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.333 [2024-07-26 11:16:51.945258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.333 [2024-07-26 11:16:51.945275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.333 [2024-07-26 11:16:51.945290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.333 [2024-07-26 11:16:51.945306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.333 [2024-07-26 11:16:51.945321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.333 [2024-07-26 11:16:51.945337] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.333 [2024-07-26 11:16:51.945352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.333 [2024-07-26 11:16:51.945368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.333 [2024-07-26 11:16:51.945383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.333 [2024-07-26 11:16:51.945400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.333 [2024-07-26 11:16:51.945415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.333 [2024-07-26 11:16:51.945441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.333 [2024-07-26 11:16:51.945470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.333 [2024-07-26 11:16:51.945492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.333 [2024-07-26 11:16:51.945508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.333 [2024-07-26 11:16:51.945527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.333 [2024-07-26 11:16:51.945543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.333 [2024-07-26 11:16:51.945561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.333 [2024-07-26 11:16:51.945576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.333 [2024-07-26 11:16:51.945593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.333 [2024-07-26 11:16:51.945614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.333 [2024-07-26 11:16:51.945632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.333 [2024-07-26 11:16:51.945648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.333 [2024-07-26 11:16:51.945665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.333 [2024-07-26 11:16:51.945681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.333 [2024-07-26 11:16:51.945698] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.333 [2024-07-26 11:16:51.945713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.333 [2024-07-26 11:16:51.945731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.333 [2024-07-26 11:16:51.945746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.333 [2024-07-26 11:16:51.945763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.333 [2024-07-26 11:16:51.945779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.333 [2024-07-26 11:16:51.945796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.333 [2024-07-26 11:16:51.945811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.333 [2024-07-26 11:16:51.945828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.333 [2024-07-26 11:16:51.945843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.333 [2024-07-26 11:16:51.945860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.333 [2024-07-26 11:16:51.945876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.333 [2024-07-26 11:16:51.945892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.333 [2024-07-26 11:16:51.945908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:56.333 [2024-07-26 11:16:51.946001] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x275ad70 was disconnected and freed. reset controller. 
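The long run of WRITE ... ABORTED - SQ DELETION entries above is the expected fallout of the host-management step under test: with I/O in flight, the host is revoked from the subsystem, the target drops the now-unauthorized TCP qpair (aborting every command queued on the deleted submission queue), and bdev_nvme on the initiator side resets and reconnects the controller once the host is re-added. In rpc.py terms the step reduces to the following (a sketch; the test issues the same RPCs through its rpc_cmd wrapper):

# revoke and re-grant the host on the subsystem, as traced above
scripts/rpc.py nvmf_subsystem_remove_host \
    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
sleep 1   # the trace sleeps here too, giving the reset/reconnect time to land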
00:08:56.333 [2024-07-26 11:16:51.947245] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
task offset: 73728 on job bdev=Nvme0n1 fails
00:08:56.333
00:08:56.333 Latency(us)
00:08:56.333 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:56.333 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:08:56.333 Job: Nvme0n1 ended in about 0.43 seconds with error
00:08:56.333 Verification LBA range: start 0x0 length 0x400
00:08:56.333 Nvme0n1 : 0.43 1335.52 83.47 148.39 0.00 41880.01 3131.16 38253.61
00:08:56.333 ===================================================================================================================
00:08:56.333 Total : 1335.52 83.47 148.39 0.00 41880.01 3131.16 38253.61
00:08:56.333 [2024-07-26 11:16:51.949302] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:08:56.333 [2024-07-26 11:16:51.949335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234a540 (9): Bad file descriptor
00:08:56.333 [2024-07-26 11:16:51.954734] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:08:57.717 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2019980
00:08:57.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2019980) - No such process
00:08:57.717 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true
00:08:57.717 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:08:57.717 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:08:57.717 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:08:57.717 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=()
00:08:57.717 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config
00:08:57.717 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:08:57.717 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:08:57.717 {
00:08:57.717 "params": {
00:08:57.717 "name": "Nvme$subsystem",
00:08:57.717 "trtype": "$TEST_TRANSPORT",
00:08:57.717 "traddr": "$NVMF_FIRST_TARGET_IP",
00:08:57.717 "adrfam": "ipv4",
00:08:57.717 "trsvcid": "$NVMF_PORT",
00:08:57.717 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:08:57.717 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:08:57.717 "hdgst": ${hdgst:-false},
00:08:57.717 "ddgst": ${ddgst:-false}
00:08:57.717 },
00:08:57.717 "method": "bdev_nvme_attach_controller"
00:08:57.717 }
00:08:57.717 EOF
00:08:57.717 )")
00:08:57.717 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat
00:08:57.717 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq .
00:08:57.717 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=,
00:08:57.717 11:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:08:57.717 "params": {
00:08:57.717 "name": "Nvme0",
00:08:57.717 "trtype": "tcp",
00:08:57.717 "traddr": "10.0.0.2",
00:08:57.717 "adrfam": "ipv4",
00:08:57.717 "trsvcid": "4420",
00:08:57.717 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:08:57.717 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:08:57.717 "hdgst": false,
00:08:57.717 "ddgst": false
00:08:57.717 },
00:08:57.717 "method": "bdev_nvme_attach_controller"
00:08:57.717 }'
00:08:57.717 [2024-07-26 11:16:53.019994] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization...
00:08:57.717 [2024-07-26 11:16:53.020103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2020186 ]
00:08:57.717 EAL: No free 2048 kB hugepages reported on node 1
00:08:57.977 [2024-07-26 11:16:53.089994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:57.977 [2024-07-26 11:16:53.212886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:57.977 Running I/O for 1 seconds...
00:08:59.353
00:08:59.353 Latency(us)
00:08:59.353 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:59.353 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:08:59.353 Verification LBA range: start 0x0 length 0x400
00:08:59.353 Nvme0n1 : 1.07 1317.26 82.33 0.00 0.00 46056.52 12427.57 53982.25
00:08:59.353 ===================================================================================================================
00:08:59.353 Total : 1317.26 82.33 0.00 0.00 46056.52 12427.57 53982.25
00:08:59.353 11:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
11:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
11:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
11:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
11:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
11:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
11:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync
11:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
11:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
11:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
11:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
11:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r
nvme-fabrics 00:08:59.353 11:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:08:59.353 11:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:08:59.353 11:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2019738 ']' 00:08:59.353 11:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2019738 00:08:59.353 11:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 2019738 ']' 00:08:59.353 11:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 2019738 00:08:59.353 11:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:08:59.353 11:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:59.353 11:16:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2019738 00:08:59.612 11:16:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:59.612 11:16:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:59.612 11:16:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2019738' 00:08:59.612 killing process with pid 2019738 00:08:59.612 11:16:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 2019738 00:08:59.612 11:16:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 2019738 00:08:59.871 [2024-07-26 11:16:55.363109] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:59.871 11:16:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:59.871 11:16:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:59.871 11:16:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:59.871 11:16:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:59.871 11:16:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:59.871 11:16:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.871 11:16:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:59.871 11:16:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.405 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:02.405 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:02.405 00:09:02.405 real 0m10.605s 00:09:02.405 user 0m24.923s 00:09:02.405 sys 0m3.456s 00:09:02.405 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:02.405 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:02.405 ************************************ 00:09:02.405 END 
TEST nvmf_host_management 00:09:02.405 ************************************ 00:09:02.405 11:16:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:02.405 11:16:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:02.405 11:16:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:02.405 11:16:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:02.405 ************************************ 00:09:02.405 START TEST nvmf_lvol 00:09:02.405 ************************************ 00:09:02.405 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:02.405 * Looking for test storage... 00:09:02.405 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:02.406 11:16:57 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:09:02.406 11:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:04.942 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:04.942 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:09:04.942 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:04.942 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:04.942 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:04.942 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:04.942 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:04.942 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:09:04.942 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 
00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:04.943 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:04.943 Found 0000:84:00.1 (0x8086 - 0x159b) 
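The scan in progress here iterates the gathered PCI addresses, matches known NIC device IDs (0x1592/0x159b are Intel E810 variants; the arrays above also cover x722 and several Mellanox ConnectX parts), and then resolves each hit to its kernel net device through sysfs. Reduced to a standalone sketch covering just the two E810 IDs matched in this run (an illustration, not the common.sh implementation):

# walk sysfs, keep Intel E810 ports, and report their net-device names
for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == 0x8086 ]] || continue
    case $(<"$pci/device") in
        0x1592 | 0x159b)
            pci_net_devs=("$pci"/net/*)                # e.g. .../net/cvl_0_0
            [[ -e ${pci_net_devs[0]} ]] &&
                echo "Found ${pci##*/}: ${pci_net_devs[0]##*/}"
            ;;
    esac
done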
00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:04.943 Found net devices under 0000:84:00.0: cvl_0_0 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:04.943 Found net devices under 0000:84:00.1: cvl_0_1 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # 
nvmf_tcp_init 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:04.943 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:04.943 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:09:04.943 00:09:04.943 --- 10.0.0.2 ping statistics --- 00:09:04.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.943 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:09:04.943 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:04.943 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:04.943 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:09:04.943 00:09:04.943 --- 10.0.0.1 ping statistics --- 00:09:04.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.943 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:09:04.944 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:04.944 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:09:04.944 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:04.944 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:04.944 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:04.944 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:04.944 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:04.944 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:04.944 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:04.944 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:04.944 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:04.944 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:04.944 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:04.944 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2022538 00:09:04.944 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:04.944 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2022538 00:09:04.944 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 2022538 ']' 00:09:04.944 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.944 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:04.944 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.944 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:04.944 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:04.944 [2024-07-26 11:17:00.515328] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
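The nvmf_tcp_init block above is worth unpacking: the two ports of the same NIC are split across a network namespace so one host can play both NVMe/TCP target and initiator over a real wire. Condensed to its essence, with the interface names and addresses exactly as in this run:

  # Target port lives in its own namespace; initiator port stays in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # NVMe/TCP default port
  ping -c 1 10.0.0.2                                                   # root ns -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # namespace -> root ns

Because NVMF_APP is then prefixed with NVMF_TARGET_NS_CMD, the nvmf_tgt started above (pid 2022538 here) listens inside cvl_0_0_ns_spdk while every initiator-side tool runs unwrapped in the root namespace.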
00:09:04.944 [2024-07-26 11:17:00.515424] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:04.944 EAL: No free 2048 kB hugepages reported on node 1 00:09:04.944 [2024-07-26 11:17:00.594609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:05.201 [2024-07-26 11:17:00.719635] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:05.201 [2024-07-26 11:17:00.719701] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:05.201 [2024-07-26 11:17:00.719717] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:05.201 [2024-07-26 11:17:00.719731] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:05.201 [2024-07-26 11:17:00.719743] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:05.201 [2024-07-26 11:17:00.719812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.201 [2024-07-26 11:17:00.719889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.201 [2024-07-26 11:17:00.719883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:05.201 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:05.201 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:09:05.201 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:05.201 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:05.201 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:05.459 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:05.459 11:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:05.717 [2024-07-26 11:17:01.151195] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:05.717 11:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:05.975 11:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:05.976 11:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:06.541 11:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:06.541 11:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:07.107 11:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:07.365 11:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=fa3d1a16-dc29-40d2-a889-5334a91060b6 
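With the target up, target/nvmf_lvol.sh assembles its backing stack over JSON-RPC: a TCP transport, two RAM-backed bdevs, a RAID0 stripe across them, and a logical-volume store on top. Reconstructed as plain commands -- $rpc stands for the scripts/rpc.py path used above, and the returned names/UUIDs vary per run:

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192             # "-t tcp -o" comes from NVMF_TRANSPORT_OPTS; -u 8192 sets the I/O unit size
  m0=$($rpc bdev_malloc_create 64 512)                     # 64 MiB malloc bdev, 512 B blocks (prints "Malloc0")
  m1=$($rpc bdev_malloc_create 64 512)
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "$m0 $m1"   # RAID0 across both, 64 KiB strip size
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)           # lvstore on the raid; prints its UUID

Striping the store across two malloc bdevs means every lvol created later exercises the raid path as well as the lvol path.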
00:09:07.365 11:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fa3d1a16-dc29-40d2-a889-5334a91060b6 lvol 20 00:09:07.623 11:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=b186ebc3-22bc-4e5a-bb1a-2578a9c4500e 00:09:07.623 11:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:08.192 11:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b186ebc3-22bc-4e5a-bb1a-2578a9c4500e 00:09:08.449 11:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:09.014 [2024-07-26 11:17:04.403456] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:09.014 11:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:09.271 11:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2023093 00:09:09.271 11:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:09.271 11:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:09.528 EAL: No free 2048 kB hugepages reported on node 1 00:09:10.465 11:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot b186ebc3-22bc-4e5a-bb1a-2578a9c4500e MY_SNAPSHOT 00:09:10.724 11:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=8822271f-b616-4c4d-9112-9e18c85e6b5c 00:09:10.724 11:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize b186ebc3-22bc-4e5a-bb1a-2578a9c4500e 30 00:09:10.983 11:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 8822271f-b616-4c4d-9112-9e18c85e6b5c MY_CLONE 00:09:11.549 11:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=8cc1693f-5859-4e1d-93ef-8565a2266a4d 00:09:11.549 11:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 8cc1693f-5859-4e1d-93ef-8565a2266a4d 00:09:12.181 11:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2023093 00:09:20.295 Initializing NVMe Controllers 00:09:20.295 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:20.295 Controller IO queue size 128, less than required. 00:09:20.295 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
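That sequence is the heart of the lvol test: a 20 MiB volume is exported as namespace 1 of cnode0, spdk_nvme_perf drives 4 KiB random writes at queue depth 128 against it for 10 seconds, and while that I/O is in flight the script mutates the volume underneath -- snapshot, grow, clone, inflate. The same RPC sequence with this run's UUIDs replaced by variables:

  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)      # 20 MiB lvol (LVOL_BDEV_INIT_SIZE)
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)  # the lvol becomes a clone of its own snapshot
  $rpc bdev_lvol_resize "$lvol" 30                     # grow the live volume to LVOL_BDEV_FINAL_SIZE
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)       # second writable view of the snapshot
  $rpc bdev_lvol_inflate "$clone"                      # copy in shared clusters; clone no longer depends on the snapshot

Surviving all of that under sustained write load (the summary that follows shows roughly 19k IOPS total across lcores 3 and 4) is precisely what the test asserts.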
00:09:20.295 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:20.295 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:20.295 Initialization complete. Launching workers. 00:09:20.295 ======================================================== 00:09:20.295 Latency(us) 00:09:20.295 Device Information : IOPS MiB/s Average min max 00:09:20.295 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9603.00 37.51 13335.85 2180.75 132634.47 00:09:20.295 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9558.60 37.34 13395.67 2302.17 52718.42 00:09:20.295 ======================================================== 00:09:20.295 Total : 19161.60 74.85 13365.69 2180.75 132634.47 00:09:20.295 00:09:20.295 11:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:20.295 11:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b186ebc3-22bc-4e5a-bb1a-2578a9c4500e 00:09:20.552 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fa3d1a16-dc29-40d2-a889-5334a91060b6 00:09:20.810 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:20.810 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:20.810 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:20.810 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:20.810 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:09:20.810 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:20.810 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:09:20.810 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:20.810 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:20.810 rmmod nvme_tcp 00:09:20.810 rmmod nvme_fabrics 00:09:21.067 rmmod nvme_keyring 00:09:21.067 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:21.067 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:09:21.067 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:09:21.067 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2022538 ']' 00:09:21.067 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2022538 00:09:21.067 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 2022538 ']' 00:09:21.067 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 2022538 00:09:21.067 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:09:21.067 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:21.067 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2022538 00:09:21.067 11:17:16 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:21.067 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:21.067 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2022538' 00:09:21.067 killing process with pid 2022538 00:09:21.067 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 2022538 00:09:21.067 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 2022538 00:09:21.326 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:21.326 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:21.326 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:21.326 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:21.326 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:21.326 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.326 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:21.326 11:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:23.860 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:23.860 00:09:23.860 real 0m21.432s 00:09:23.860 user 1m11.777s 00:09:23.860 sys 0m6.364s 00:09:23.860 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:23.860 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:23.860 ************************************ 00:09:23.860 END TEST nvmf_lvol 00:09:23.860 ************************************ 00:09:23.860 11:17:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:23.860 11:17:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:23.860 11:17:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:23.860 11:17:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:23.860 ************************************ 00:09:23.860 START TEST nvmf_lvs_grow 00:09:23.861 ************************************ 00:09:23.861 11:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:23.861 * Looking for test storage... 
00:09:23.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.861 11:17:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:23.861 11:17:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:09:23.861 11:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:26.394 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:26.394 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:26.394 
11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:26.394 Found net devices under 0000:84:00.0: cvl_0_0 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:26.394 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:26.395 Found net devices under 0000:84:00.1: cvl_0_1 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:26.395 11:17:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:26.395 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:26.395 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:09:26.395 00:09:26.395 --- 10.0.0.2 ping statistics --- 00:09:26.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.395 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:26.395 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:26.395 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:09:26.395 00:09:26.395 --- 10.0.0.1 ping statistics --- 00:09:26.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.395 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2026398 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2026398 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 2026398 ']' 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:26.395 11:17:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:26.395 [2024-07-26 11:17:22.030976] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
00:09:26.395 [2024-07-26 11:17:22.031080] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:26.653 EAL: No free 2048 kB hugepages reported on node 1 00:09:26.653 [2024-07-26 11:17:22.115718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.653 [2024-07-26 11:17:22.238997] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:26.653 [2024-07-26 11:17:22.239066] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:26.653 [2024-07-26 11:17:22.239083] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:26.653 [2024-07-26 11:17:22.239096] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:26.653 [2024-07-26 11:17:22.239108] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:26.653 [2024-07-26 11:17:22.239149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.911 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:26.911 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:09:26.911 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:26.911 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:26.911 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:26.911 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:26.911 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:27.169 [2024-07-26 11:17:22.733556] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:27.169 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:27.169 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:27.169 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:27.169 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:27.169 ************************************ 00:09:27.169 START TEST lvs_grow_clean 00:09:27.169 ************************************ 00:09:27.169 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:09:27.169 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:27.169 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:27.169 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:27.169 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:09:27.169 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:27.169 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:27.169 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:27.169 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:27.169 11:17:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:27.735 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:27.736 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:27.993 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=1fcc3a9c-20f2-4eb1-9e64-6ac037656b85 00:09:27.993 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1fcc3a9c-20f2-4eb1-9e64-6ac037656b85 00:09:27.993 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:28.250 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:28.250 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:28.250 11:17:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1fcc3a9c-20f2-4eb1-9e64-6ac037656b85 lvol 150 00:09:28.507 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=d93182a5-b3b4-491e-985c-855431b0e656 00:09:28.507 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:28.507 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:29.072 [2024-07-26 11:17:24.444115] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:29.072 [2024-07-26 11:17:24.444205] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:29.072 true 00:09:29.072 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1fcc3a9c-20f2-4eb1-9e64-6ac037656b85 00:09:29.072 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:29.072 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:29.072 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:29.637 11:17:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d93182a5-b3b4-491e-985c-855431b0e656 00:09:29.638 11:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:30.202 [2024-07-26 11:17:25.567541] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.202 11:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:30.460 11:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2026958 00:09:30.460 11:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:30.460 11:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:30.460 11:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2026958 /var/tmp/bdevperf.sock 00:09:30.460 11:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 2026958 ']' 00:09:30.460 11:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:30.460 11:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:30.460 11:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:30.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:30.460 11:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:30.460 11:17:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:30.460 [2024-07-26 11:17:25.919495] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
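The lvs_grow_clean setup above follows a different pattern from the lvol test: the lvstore sits on a file-backed AIO bdev, so growing it is a three-step dance -- enlarge the file, rescan the bdev, then tell the lvstore to claim the new clusters. A linearized skeleton of the sequence, with the long workspace path shortened to aio_file; in the actual run the final grow happens only after bdevperf is attached and writing:

  truncate -s 200M aio_file
  $rpc bdev_aio_create aio_file aio_bdev 4096          # 4096 B blocks -> 51200 of them
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  $rpc bdev_lvol_create -u "$lvs" lvol 150             # 150 MiB lvol; the store reports 49 data clusters
  truncate -s 400M aio_file                            # grow the backing file...
  $rpc bdev_aio_rescan aio_bdev                        # ...and make the bdev notice: 51200 -> 102400 blocks
  $rpc bdev_lvol_grow_lvstore -u "$lvs"                # lvstore claims the space: 49 -> 99 clusters

The two cluster-count checks bracketing the grow (data_clusters == 49 before, == 99 after) are the test's actual pass/fail condition.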
00:09:30.460 [2024-07-26 11:17:25.919587] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2026958 ] 00:09:30.460 EAL: No free 2048 kB hugepages reported on node 1 00:09:30.460 [2024-07-26 11:17:25.987209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.460 [2024-07-26 11:17:26.110924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.027 11:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:31.027 11:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:09:31.027 11:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:31.621 Nvme0n1 00:09:31.621 11:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:31.621 [ 00:09:31.621 { 00:09:31.621 "name": "Nvme0n1", 00:09:31.621 "aliases": [ 00:09:31.621 "d93182a5-b3b4-491e-985c-855431b0e656" 00:09:31.621 ], 00:09:31.621 "product_name": "NVMe disk", 00:09:31.621 "block_size": 4096, 00:09:31.621 "num_blocks": 38912, 00:09:31.622 "uuid": "d93182a5-b3b4-491e-985c-855431b0e656", 00:09:31.622 "assigned_rate_limits": { 00:09:31.622 "rw_ios_per_sec": 0, 00:09:31.622 "rw_mbytes_per_sec": 0, 00:09:31.622 "r_mbytes_per_sec": 0, 00:09:31.622 "w_mbytes_per_sec": 0 00:09:31.622 }, 00:09:31.622 "claimed": false, 00:09:31.622 "zoned": false, 00:09:31.622 "supported_io_types": { 00:09:31.622 "read": true, 00:09:31.622 "write": true, 00:09:31.622 "unmap": true, 00:09:31.622 "flush": true, 00:09:31.622 "reset": true, 00:09:31.622 "nvme_admin": true, 00:09:31.622 "nvme_io": true, 00:09:31.622 "nvme_io_md": false, 00:09:31.622 "write_zeroes": true, 00:09:31.622 "zcopy": false, 00:09:31.622 "get_zone_info": false, 00:09:31.622 "zone_management": false, 00:09:31.622 "zone_append": false, 00:09:31.622 "compare": true, 00:09:31.622 "compare_and_write": true, 00:09:31.622 "abort": true, 00:09:31.622 "seek_hole": false, 00:09:31.622 "seek_data": false, 00:09:31.622 "copy": true, 00:09:31.622 "nvme_iov_md": false 00:09:31.622 }, 00:09:31.622 "memory_domains": [ 00:09:31.622 { 00:09:31.622 "dma_device_id": "system", 00:09:31.622 "dma_device_type": 1 00:09:31.622 } 00:09:31.622 ], 00:09:31.622 "driver_specific": { 00:09:31.622 "nvme": [ 00:09:31.622 { 00:09:31.622 "trid": { 00:09:31.622 "trtype": "TCP", 00:09:31.622 "adrfam": "IPv4", 00:09:31.622 "traddr": "10.0.0.2", 00:09:31.622 "trsvcid": "4420", 00:09:31.622 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:31.622 }, 00:09:31.622 "ctrlr_data": { 00:09:31.622 "cntlid": 1, 00:09:31.622 "vendor_id": "0x8086", 00:09:31.622 "model_number": "SPDK bdev Controller", 00:09:31.622 "serial_number": "SPDK0", 00:09:31.622 "firmware_revision": "24.09", 00:09:31.622 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:31.622 "oacs": { 00:09:31.622 "security": 0, 00:09:31.622 "format": 0, 00:09:31.622 "firmware": 0, 00:09:31.622 "ns_manage": 0 00:09:31.622 }, 00:09:31.622 
"multi_ctrlr": true, 00:09:31.622 "ana_reporting": false 00:09:31.622 }, 00:09:31.622 "vs": { 00:09:31.622 "nvme_version": "1.3" 00:09:31.622 }, 00:09:31.622 "ns_data": { 00:09:31.622 "id": 1, 00:09:31.622 "can_share": true 00:09:31.622 } 00:09:31.622 } 00:09:31.622 ], 00:09:31.622 "mp_policy": "active_passive" 00:09:31.622 } 00:09:31.622 } 00:09:31.622 ] 00:09:31.622 11:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2027100 00:09:31.622 11:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:31.622 11:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:31.880 Running I/O for 10 seconds... 00:09:32.815 Latency(us) 00:09:32.815 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:32.815 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.815 Nvme0n1 : 1.00 13975.00 54.59 0.00 0.00 0.00 0.00 0.00 00:09:32.815 =================================================================================================================== 00:09:32.815 Total : 13975.00 54.59 0.00 0.00 0.00 0.00 0.00 00:09:32.815 00:09:33.747 11:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1fcc3a9c-20f2-4eb1-9e64-6ac037656b85 00:09:33.747 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.747 Nvme0n1 : 2.00 14184.00 55.41 0.00 0.00 0.00 0.00 0.00 00:09:33.747 =================================================================================================================== 00:09:33.747 Total : 14184.00 55.41 0.00 0.00 0.00 0.00 0.00 00:09:33.747 00:09:34.005 true 00:09:34.005 11:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1fcc3a9c-20f2-4eb1-9e64-6ac037656b85 00:09:34.005 11:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:34.571 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:34.571 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:34.571 11:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2027100 00:09:34.828 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:34.828 Nvme0n1 : 3.00 14295.33 55.84 0.00 0.00 0.00 0.00 0.00 00:09:34.828 =================================================================================================================== 00:09:34.828 Total : 14295.33 55.84 0.00 0.00 0.00 0.00 0.00 00:09:34.828 00:09:35.763 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:35.763 Nvme0n1 : 4.00 14342.00 56.02 0.00 0.00 0.00 0.00 0.00 00:09:35.763 =================================================================================================================== 00:09:35.763 Total : 14342.00 56.02 0.00 0.00 0.00 0.00 0.00 00:09:35.763 00:09:37.138 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:09:37.138 Nvme0n1 : 5.00 14408.20 56.28 0.00 0.00 0.00 0.00 0.00 00:09:37.138 =================================================================================================================== 00:09:37.138 Total : 14408.20 56.28 0.00 0.00 0.00 0.00 0.00 00:09:37.138 00:09:38.072 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.072 Nvme0n1 : 6.00 14444.00 56.42 0.00 0.00 0.00 0.00 0.00 00:09:38.072 =================================================================================================================== 00:09:38.072 Total : 14444.00 56.42 0.00 0.00 0.00 0.00 0.00 00:09:38.072 00:09:39.006 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.006 Nvme0n1 : 7.00 14487.43 56.59 0.00 0.00 0.00 0.00 0.00 00:09:39.006 =================================================================================================================== 00:09:39.006 Total : 14487.43 56.59 0.00 0.00 0.00 0.00 0.00 00:09:39.006 00:09:39.940 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.940 Nvme0n1 : 8.00 14532.50 56.77 0.00 0.00 0.00 0.00 0.00 00:09:39.940 =================================================================================================================== 00:09:39.940 Total : 14532.50 56.77 0.00 0.00 0.00 0.00 0.00 00:09:39.940 00:09:40.873 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:40.873 Nvme0n1 : 9.00 14554.78 56.85 0.00 0.00 0.00 0.00 0.00 00:09:40.873 =================================================================================================================== 00:09:40.873 Total : 14554.78 56.85 0.00 0.00 0.00 0.00 0.00 00:09:40.873 00:09:41.807 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:41.807 Nvme0n1 : 10.00 14589.50 56.99 0.00 0.00 0.00 0.00 0.00 00:09:41.807 =================================================================================================================== 00:09:41.807 Total : 14589.50 56.99 0.00 0.00 0.00 0.00 0.00 00:09:41.807 00:09:41.807 00:09:41.807 Latency(us) 00:09:41.807 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:41.807 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:41.807 Nvme0n1 : 10.01 14590.12 56.99 0.00 0.00 8766.78 4878.79 17476.27 00:09:41.807 =================================================================================================================== 00:09:41.807 Total : 14590.12 56.99 0.00 0.00 8766.78 4878.79 17476.27 00:09:41.807 0 00:09:41.807 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2026958 00:09:41.807 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 2026958 ']' 00:09:41.807 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 2026958 00:09:41.807 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:09:41.807 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:41.807 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2026958 00:09:41.807 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:41.807 
11:17:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:41.807 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2026958' 00:09:41.807 killing process with pid 2026958 00:09:41.807 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 2026958 00:09:41.807 Received shutdown signal, test time was about 10.000000 seconds 00:09:41.807 00:09:41.807 Latency(us) 00:09:41.807 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:41.807 =================================================================================================================== 00:09:41.807 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:41.807 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 2026958 00:09:42.372 11:17:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:42.630 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:42.888 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1fcc3a9c-20f2-4eb1-9e64-6ac037656b85 00:09:42.888 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:43.148 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:43.148 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:43.148 11:17:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:43.406 [2024-07-26 11:17:39.012602] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:43.406 11:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1fcc3a9c-20f2-4eb1-9e64-6ac037656b85 00:09:43.406 11:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:09:43.406 11:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1fcc3a9c-20f2-4eb1-9e64-6ac037656b85 00:09:43.406 11:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:43.406 11:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:43.406 11:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:43.406 11:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:43.406 11:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:43.406 11:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:43.406 11:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:43.406 11:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:43.406 11:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1fcc3a9c-20f2-4eb1-9e64-6ac037656b85 00:09:43.972 request: 00:09:43.972 { 00:09:43.972 "uuid": "1fcc3a9c-20f2-4eb1-9e64-6ac037656b85", 00:09:43.972 "method": "bdev_lvol_get_lvstores", 00:09:43.972 "req_id": 1 00:09:43.972 } 00:09:43.972 Got JSON-RPC error response 00:09:43.972 response: 00:09:43.972 { 00:09:43.972 "code": -19, 00:09:43.972 "message": "No such device" 00:09:43.972 } 00:09:43.972 11:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:09:43.972 11:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:43.972 11:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:43.972 11:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:43.972 11:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:44.230 aio_bdev 00:09:44.230 11:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d93182a5-b3b4-491e-985c-855431b0e656 00:09:44.230 11:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=d93182a5-b3b4-491e-985c-855431b0e656 00:09:44.230 11:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:44.230 11:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:09:44.230 11:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:44.230 11:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:44.230 11:17:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:44.795 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_get_bdevs -b d93182a5-b3b4-491e-985c-855431b0e656 -t 2000 00:09:45.361 [ 00:09:45.361 { 00:09:45.361 "name": "d93182a5-b3b4-491e-985c-855431b0e656", 00:09:45.361 "aliases": [ 00:09:45.361 "lvs/lvol" 00:09:45.361 ], 00:09:45.361 "product_name": "Logical Volume", 00:09:45.361 "block_size": 4096, 00:09:45.361 "num_blocks": 38912, 00:09:45.361 "uuid": "d93182a5-b3b4-491e-985c-855431b0e656", 00:09:45.361 "assigned_rate_limits": { 00:09:45.361 "rw_ios_per_sec": 0, 00:09:45.361 "rw_mbytes_per_sec": 0, 00:09:45.361 "r_mbytes_per_sec": 0, 00:09:45.361 "w_mbytes_per_sec": 0 00:09:45.361 }, 00:09:45.361 "claimed": false, 00:09:45.361 "zoned": false, 00:09:45.361 "supported_io_types": { 00:09:45.361 "read": true, 00:09:45.361 "write": true, 00:09:45.361 "unmap": true, 00:09:45.361 "flush": false, 00:09:45.361 "reset": true, 00:09:45.361 "nvme_admin": false, 00:09:45.361 "nvme_io": false, 00:09:45.361 "nvme_io_md": false, 00:09:45.361 "write_zeroes": true, 00:09:45.361 "zcopy": false, 00:09:45.361 "get_zone_info": false, 00:09:45.361 "zone_management": false, 00:09:45.361 "zone_append": false, 00:09:45.361 "compare": false, 00:09:45.361 "compare_and_write": false, 00:09:45.361 "abort": false, 00:09:45.361 "seek_hole": true, 00:09:45.361 "seek_data": true, 00:09:45.361 "copy": false, 00:09:45.361 "nvme_iov_md": false 00:09:45.361 }, 00:09:45.361 "driver_specific": { 00:09:45.361 "lvol": { 00:09:45.361 "lvol_store_uuid": "1fcc3a9c-20f2-4eb1-9e64-6ac037656b85", 00:09:45.361 "base_bdev": "aio_bdev", 00:09:45.361 "thin_provision": false, 00:09:45.361 "num_allocated_clusters": 38, 00:09:45.361 "snapshot": false, 00:09:45.361 "clone": false, 00:09:45.361 "esnap_clone": false 00:09:45.361 } 00:09:45.361 } 00:09:45.361 } 00:09:45.361 ] 00:09:45.361 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:09:45.361 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1fcc3a9c-20f2-4eb1-9e64-6ac037656b85 00:09:45.361 11:17:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:45.619 11:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:45.619 11:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1fcc3a9c-20f2-4eb1-9e64-6ac037656b85 00:09:45.619 11:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:45.876 11:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:45.876 11:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d93182a5-b3b4-491e-985c-855431b0e656 00:09:46.134 11:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1fcc3a9c-20f2-4eb1-9e64-6ac037656b85 00:09:46.392 11:17:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:46.650 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:46.650 00:09:46.650 real 0m19.498s 00:09:46.650 user 0m19.111s 00:09:46.650 sys 0m2.268s 00:09:46.650 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:46.650 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:46.650 ************************************ 00:09:46.650 END TEST lvs_grow_clean 00:09:46.650 ************************************ 00:09:46.908 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:46.908 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:46.908 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:46.908 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:46.908 ************************************ 00:09:46.908 START TEST lvs_grow_dirty 00:09:46.908 ************************************ 00:09:46.908 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:09:46.908 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:46.908 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:46.908 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:46.908 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:46.908 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:46.908 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:46.908 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:46.908 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:46.908 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:47.165 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:47.166 11:17:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:47.423 11:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
lvs=ecfa76bb-fb4e-43da-9cac-2c790fdff74c 00:09:47.423 11:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ecfa76bb-fb4e-43da-9cac-2c790fdff74c 00:09:47.423 11:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:48.027 11:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:48.027 11:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:48.027 11:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ecfa76bb-fb4e-43da-9cac-2c790fdff74c lvol 150 00:09:48.027 11:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=c3953a40-031d-440a-a0be-be91a3e15ac6 00:09:48.027 11:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:48.027 11:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:48.591 [2024-07-26 11:17:43.958270] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:48.591 [2024-07-26 11:17:43.958385] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:48.591 true 00:09:48.591 11:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ecfa76bb-fb4e-43da-9cac-2c790fdff74c 00:09:48.591 11:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:49.157 11:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:49.157 11:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:49.415 11:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c3953a40-031d-440a-a0be-be91a3e15ac6 00:09:49.980 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:50.238 [2024-07-26 11:17:45.643256] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:50.238 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 
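For orientation before the bdevperf phase that follows: stripped of the xtrace noise, the export-and-attach handshake this test drives is just a handful of RPCs. A minimal sketch, assuming $lvol holds the lvol UUID created above (the variable and the relative paths are illustrative; every subcommand and flag below appears verbatim in this log):

  # target side: publish the lvol over NVMe/TCP
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # initiator side: bdevperf runs with its own RPC socket (-r /var/tmp/bdevperf.sock),
  # so the controller is attached through that socket rather than the target's
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000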
00:09:50.496 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2029281 00:09:50.496 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:50.496 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:50.496 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2029281 /var/tmp/bdevperf.sock 00:09:50.496 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2029281 ']' 00:09:50.496 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:50.496 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:50.496 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:50.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:50.496 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:50.496 11:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:50.496 [2024-07-26 11:17:45.994011] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
00:09:50.496 [2024-07-26 11:17:45.994095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2029281 ] 00:09:50.496 EAL: No free 2048 kB hugepages reported on node 1 00:09:50.496 [2024-07-26 11:17:46.061196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.754 [2024-07-26 11:17:46.186816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.754 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:50.754 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:50.754 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:51.319 Nvme0n1 00:09:51.319 11:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:51.884 [ 00:09:51.884 { 00:09:51.884 "name": "Nvme0n1", 00:09:51.884 "aliases": [ 00:09:51.884 "c3953a40-031d-440a-a0be-be91a3e15ac6" 00:09:51.884 ], 00:09:51.884 "product_name": "NVMe disk", 00:09:51.884 "block_size": 4096, 00:09:51.884 "num_blocks": 38912, 00:09:51.884 "uuid": "c3953a40-031d-440a-a0be-be91a3e15ac6", 00:09:51.884 "assigned_rate_limits": { 00:09:51.884 "rw_ios_per_sec": 0, 00:09:51.884 "rw_mbytes_per_sec": 0, 00:09:51.884 "r_mbytes_per_sec": 0, 00:09:51.884 "w_mbytes_per_sec": 0 00:09:51.884 }, 00:09:51.884 "claimed": false, 00:09:51.884 "zoned": false, 00:09:51.884 "supported_io_types": { 00:09:51.884 "read": true, 00:09:51.884 "write": true, 00:09:51.884 "unmap": true, 00:09:51.884 "flush": true, 00:09:51.884 "reset": true, 00:09:51.884 "nvme_admin": true, 00:09:51.884 "nvme_io": true, 00:09:51.884 "nvme_io_md": false, 00:09:51.884 "write_zeroes": true, 00:09:51.884 "zcopy": false, 00:09:51.884 "get_zone_info": false, 00:09:51.884 "zone_management": false, 00:09:51.884 "zone_append": false, 00:09:51.884 "compare": true, 00:09:51.884 "compare_and_write": true, 00:09:51.884 "abort": true, 00:09:51.884 "seek_hole": false, 00:09:51.884 "seek_data": false, 00:09:51.884 "copy": true, 00:09:51.884 "nvme_iov_md": false 00:09:51.884 }, 00:09:51.884 "memory_domains": [ 00:09:51.884 { 00:09:51.884 "dma_device_id": "system", 00:09:51.884 "dma_device_type": 1 00:09:51.884 } 00:09:51.884 ], 00:09:51.884 "driver_specific": { 00:09:51.884 "nvme": [ 00:09:51.884 { 00:09:51.884 "trid": { 00:09:51.884 "trtype": "TCP", 00:09:51.884 "adrfam": "IPv4", 00:09:51.884 "traddr": "10.0.0.2", 00:09:51.884 "trsvcid": "4420", 00:09:51.884 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:51.884 }, 00:09:51.884 "ctrlr_data": { 00:09:51.884 "cntlid": 1, 00:09:51.884 "vendor_id": "0x8086", 00:09:51.884 "model_number": "SPDK bdev Controller", 00:09:51.884 "serial_number": "SPDK0", 00:09:51.884 "firmware_revision": "24.09", 00:09:51.884 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:51.884 "oacs": { 00:09:51.884 "security": 0, 00:09:51.884 "format": 0, 00:09:51.884 "firmware": 0, 00:09:51.884 "ns_manage": 0 00:09:51.884 }, 00:09:51.884 
"multi_ctrlr": true, 00:09:51.884 "ana_reporting": false 00:09:51.884 }, 00:09:51.884 "vs": { 00:09:51.884 "nvme_version": "1.3" 00:09:51.884 }, 00:09:51.884 "ns_data": { 00:09:51.884 "id": 1, 00:09:51.884 "can_share": true 00:09:51.884 } 00:09:51.884 } 00:09:51.884 ], 00:09:51.885 "mp_policy": "active_passive" 00:09:51.885 } 00:09:51.885 } 00:09:51.885 ] 00:09:51.885 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2029514 00:09:51.885 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:51.885 11:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:51.885 Running I/O for 10 seconds... 00:09:53.258 Latency(us) 00:09:53.258 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:53.258 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:53.258 Nvme0n1 : 1.00 14079.00 55.00 0.00 0.00 0.00 0.00 0.00 00:09:53.258 =================================================================================================================== 00:09:53.258 Total : 14079.00 55.00 0.00 0.00 0.00 0.00 0.00 00:09:53.258 00:09:53.824 11:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ecfa76bb-fb4e-43da-9cac-2c790fdff74c 00:09:54.081 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:54.081 Nvme0n1 : 2.00 14216.50 55.53 0.00 0.00 0.00 0.00 0.00 00:09:54.081 =================================================================================================================== 00:09:54.081 Total : 14216.50 55.53 0.00 0.00 0.00 0.00 0.00 00:09:54.081 00:09:54.081 true 00:09:54.081 11:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ecfa76bb-fb4e-43da-9cac-2c790fdff74c 00:09:54.081 11:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:54.338 11:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:54.338 11:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:54.338 11:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2029514 00:09:54.904 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:54.904 Nvme0n1 : 3.00 14315.00 55.92 0.00 0.00 0.00 0.00 0.00 00:09:54.904 =================================================================================================================== 00:09:54.904 Total : 14315.00 55.92 0.00 0.00 0.00 0.00 0.00 00:09:54.904 00:09:55.838 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:55.838 Nvme0n1 : 4.00 14386.25 56.20 0.00 0.00 0.00 0.00 0.00 00:09:55.838 =================================================================================================================== 00:09:55.838 Total : 14386.25 56.20 0.00 0.00 0.00 0.00 0.00 00:09:55.838 00:09:57.212 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:09:57.212 Nvme0n1 : 5.00 14427.60 56.36 0.00 0.00 0.00 0.00 0.00 00:09:57.212 =================================================================================================================== 00:09:57.212 Total : 14427.60 56.36 0.00 0.00 0.00 0.00 0.00 00:09:57.212 00:09:58.145 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:58.145 Nvme0n1 : 6.00 14478.33 56.56 0.00 0.00 0.00 0.00 0.00 00:09:58.145 =================================================================================================================== 00:09:58.145 Total : 14478.33 56.56 0.00 0.00 0.00 0.00 0.00 00:09:58.145 00:09:59.077 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:59.077 Nvme0n1 : 7.00 14522.86 56.73 0.00 0.00 0.00 0.00 0.00 00:09:59.077 =================================================================================================================== 00:09:59.078 Total : 14522.86 56.73 0.00 0.00 0.00 0.00 0.00 00:09:59.078 00:10:00.011 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:00.011 Nvme0n1 : 8.00 14550.12 56.84 0.00 0.00 0.00 0.00 0.00 00:10:00.011 =================================================================================================================== 00:10:00.011 Total : 14550.12 56.84 0.00 0.00 0.00 0.00 0.00 00:10:00.011 00:10:00.942 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:00.942 Nvme0n1 : 9.00 14571.22 56.92 0.00 0.00 0.00 0.00 0.00 00:10:00.942 =================================================================================================================== 00:10:00.942 Total : 14571.22 56.92 0.00 0.00 0.00 0.00 0.00 00:10:00.942 00:10:01.872 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:01.872 Nvme0n1 : 10.00 14593.30 57.01 0.00 0.00 0.00 0.00 0.00 00:10:01.872 =================================================================================================================== 00:10:01.872 Total : 14593.30 57.01 0.00 0.00 0.00 0.00 0.00 00:10:01.872 00:10:01.872 00:10:01.872 Latency(us) 00:10:01.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:01.872 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:01.872 Nvme0n1 : 10.01 14595.69 57.01 0.00 0.00 8763.53 2269.49 16699.54 00:10:01.872 =================================================================================================================== 00:10:01.872 Total : 14595.69 57.01 0.00 0.00 8763.53 2269.49 16699.54 00:10:01.872 0 00:10:01.872 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2029281 00:10:01.872 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 2029281 ']' 00:10:01.872 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 2029281 00:10:01.872 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:10:02.129 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:02.129 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2029281 00:10:02.129 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:02.129 
11:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:02.129 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2029281' 00:10:02.129 killing process with pid 2029281 00:10:02.129 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 2029281 00:10:02.129 Received shutdown signal, test time was about 10.000000 seconds 00:10:02.129 00:10:02.129 Latency(us) 00:10:02.129 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:02.129 =================================================================================================================== 00:10:02.129 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:02.129 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 2029281 00:10:02.386 11:17:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:02.949 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:03.206 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ecfa76bb-fb4e-43da-9cac-2c790fdff74c 00:10:03.206 11:17:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:03.803 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:03.803 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:03.803 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2026398 00:10:03.803 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2026398 00:10:03.803 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2026398 Killed "${NVMF_APP[@]}" "$@" 00:10:03.803 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:03.803 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:03.803 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:03.803 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:03.803 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:03.803 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2030882 00:10:03.803 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:03.803 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
nvmf/common.sh@482 -- # waitforlisten 2030882 00:10:03.803 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2030882 ']' 00:10:03.803 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.803 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:03.803 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.803 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:03.803 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:03.803 [2024-07-26 11:17:59.259091] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:10:03.803 [2024-07-26 11:17:59.259188] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:03.803 EAL: No free 2048 kB hugepages reported on node 1 00:10:03.803 [2024-07-26 11:17:59.339027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.803 [2024-07-26 11:17:59.463058] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:03.803 [2024-07-26 11:17:59.463120] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:03.803 [2024-07-26 11:17:59.463138] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:03.803 [2024-07-26 11:17:59.463152] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:03.803 [2024-07-26 11:17:59.463165] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
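The hard kill and restart above are the point of the dirty variant: the previous nvmf_tgt (pid 2026398) died with the lvstore still open, so the bdev_aio_create that follows has to recover the blobstore during examine instead of loading it clean. A hedged sketch of that verify step, assuming $lvs holds the lvstore UUID from earlier in the run (variable name illustrative, paths abbreviated):

  # re-create the AIO bdev on the same backing file; lvol examine triggers recovery
  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  scripts/rpc.py bdev_wait_for_examine
  # the grow performed before the crash must have survived:
  scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'        # expect 61
  scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # expect 99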
00:10:03.803 [2024-07-26 11:17:59.463208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.060 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:04.060 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:10:04.060 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:04.060 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:04.060 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:04.060 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:04.060 11:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:04.623 [2024-07-26 11:18:00.089331] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:04.623 [2024-07-26 11:18:00.089515] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:04.623 [2024-07-26 11:18:00.089573] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:04.623 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:04.623 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev c3953a40-031d-440a-a0be-be91a3e15ac6 00:10:04.623 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=c3953a40-031d-440a-a0be-be91a3e15ac6 00:10:04.623 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:04.623 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:10:04.623 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:04.623 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:04.623 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:04.880 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c3953a40-031d-440a-a0be-be91a3e15ac6 -t 2000 00:10:05.137 [ 00:10:05.137 { 00:10:05.137 "name": "c3953a40-031d-440a-a0be-be91a3e15ac6", 00:10:05.137 "aliases": [ 00:10:05.137 "lvs/lvol" 00:10:05.137 ], 00:10:05.137 "product_name": "Logical Volume", 00:10:05.137 "block_size": 4096, 00:10:05.137 "num_blocks": 38912, 00:10:05.137 "uuid": "c3953a40-031d-440a-a0be-be91a3e15ac6", 00:10:05.137 "assigned_rate_limits": { 00:10:05.137 "rw_ios_per_sec": 0, 00:10:05.137 "rw_mbytes_per_sec": 0, 00:10:05.137 "r_mbytes_per_sec": 0, 00:10:05.137 "w_mbytes_per_sec": 0 00:10:05.137 }, 00:10:05.137 "claimed": false, 00:10:05.137 "zoned": false, 
00:10:05.137 "supported_io_types": { 00:10:05.137 "read": true, 00:10:05.137 "write": true, 00:10:05.137 "unmap": true, 00:10:05.137 "flush": false, 00:10:05.137 "reset": true, 00:10:05.137 "nvme_admin": false, 00:10:05.137 "nvme_io": false, 00:10:05.137 "nvme_io_md": false, 00:10:05.137 "write_zeroes": true, 00:10:05.137 "zcopy": false, 00:10:05.137 "get_zone_info": false, 00:10:05.137 "zone_management": false, 00:10:05.137 "zone_append": false, 00:10:05.137 "compare": false, 00:10:05.137 "compare_and_write": false, 00:10:05.137 "abort": false, 00:10:05.137 "seek_hole": true, 00:10:05.137 "seek_data": true, 00:10:05.137 "copy": false, 00:10:05.137 "nvme_iov_md": false 00:10:05.137 }, 00:10:05.137 "driver_specific": { 00:10:05.137 "lvol": { 00:10:05.137 "lvol_store_uuid": "ecfa76bb-fb4e-43da-9cac-2c790fdff74c", 00:10:05.137 "base_bdev": "aio_bdev", 00:10:05.137 "thin_provision": false, 00:10:05.137 "num_allocated_clusters": 38, 00:10:05.137 "snapshot": false, 00:10:05.137 "clone": false, 00:10:05.137 "esnap_clone": false 00:10:05.137 } 00:10:05.137 } 00:10:05.137 } 00:10:05.137 ] 00:10:05.137 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:10:05.137 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ecfa76bb-fb4e-43da-9cac-2c790fdff74c 00:10:05.137 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:05.395 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:05.395 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ecfa76bb-fb4e-43da-9cac-2c790fdff74c 00:10:05.395 11:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:05.652 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:05.652 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:05.910 [2024-07-26 11:18:01.562562] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:06.168 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ecfa76bb-fb4e-43da-9cac-2c790fdff74c 00:10:06.168 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:10:06.168 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ecfa76bb-fb4e-43da-9cac-2c790fdff74c 00:10:06.168 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:06.168 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:10:06.168 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:06.168 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:06.168 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:06.168 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:06.168 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:06.168 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:06.168 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ecfa76bb-fb4e-43da-9cac-2c790fdff74c 00:10:06.426 request: 00:10:06.426 { 00:10:06.426 "uuid": "ecfa76bb-fb4e-43da-9cac-2c790fdff74c", 00:10:06.426 "method": "bdev_lvol_get_lvstores", 00:10:06.426 "req_id": 1 00:10:06.426 } 00:10:06.426 Got JSON-RPC error response 00:10:06.426 response: 00:10:06.426 { 00:10:06.426 "code": -19, 00:10:06.426 "message": "No such device" 00:10:06.426 } 00:10:06.426 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:10:06.426 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:06.426 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:06.426 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:06.426 11:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:06.684 aio_bdev 00:10:06.684 11:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c3953a40-031d-440a-a0be-be91a3e15ac6 00:10:06.684 11:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=c3953a40-031d-440a-a0be-be91a3e15ac6 00:10:06.684 11:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:06.684 11:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:10:06.684 11:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:06.684 11:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:06.684 11:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:06.941 11:18:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c3953a40-031d-440a-a0be-be91a3e15ac6 -t 2000 00:10:07.506 [ 00:10:07.506 { 00:10:07.506 "name": "c3953a40-031d-440a-a0be-be91a3e15ac6", 00:10:07.506 "aliases": [ 00:10:07.506 "lvs/lvol" 00:10:07.506 ], 00:10:07.506 "product_name": "Logical Volume", 00:10:07.506 "block_size": 4096, 00:10:07.506 "num_blocks": 38912, 00:10:07.506 "uuid": "c3953a40-031d-440a-a0be-be91a3e15ac6", 00:10:07.506 "assigned_rate_limits": { 00:10:07.506 "rw_ios_per_sec": 0, 00:10:07.506 "rw_mbytes_per_sec": 0, 00:10:07.506 "r_mbytes_per_sec": 0, 00:10:07.506 "w_mbytes_per_sec": 0 00:10:07.506 }, 00:10:07.506 "claimed": false, 00:10:07.506 "zoned": false, 00:10:07.506 "supported_io_types": { 00:10:07.506 "read": true, 00:10:07.506 "write": true, 00:10:07.506 "unmap": true, 00:10:07.506 "flush": false, 00:10:07.506 "reset": true, 00:10:07.506 "nvme_admin": false, 00:10:07.506 "nvme_io": false, 00:10:07.506 "nvme_io_md": false, 00:10:07.506 "write_zeroes": true, 00:10:07.506 "zcopy": false, 00:10:07.506 "get_zone_info": false, 00:10:07.506 "zone_management": false, 00:10:07.506 "zone_append": false, 00:10:07.506 "compare": false, 00:10:07.506 "compare_and_write": false, 00:10:07.507 "abort": false, 00:10:07.507 "seek_hole": true, 00:10:07.507 "seek_data": true, 00:10:07.507 "copy": false, 00:10:07.507 "nvme_iov_md": false 00:10:07.507 }, 00:10:07.507 "driver_specific": { 00:10:07.507 "lvol": { 00:10:07.507 "lvol_store_uuid": "ecfa76bb-fb4e-43da-9cac-2c790fdff74c", 00:10:07.507 "base_bdev": "aio_bdev", 00:10:07.507 "thin_provision": false, 00:10:07.507 "num_allocated_clusters": 38, 00:10:07.507 "snapshot": false, 00:10:07.507 "clone": false, 00:10:07.507 "esnap_clone": false 00:10:07.507 } 00:10:07.507 } 00:10:07.507 } 00:10:07.507 ] 00:10:07.507 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:10:07.507 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ecfa76bb-fb4e-43da-9cac-2c790fdff74c 00:10:07.507 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:07.765 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:07.765 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ecfa76bb-fb4e-43da-9cac-2c790fdff74c 00:10:07.765 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:08.021 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:08.021 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c3953a40-031d-440a-a0be-be91a3e15ac6 00:10:08.278 11:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ecfa76bb-fb4e-43da-9cac-2c790fdff74c 
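The full teardown, gathered from the surrounding lines into one sketch (this run's UUIDs substituted back in, paths abbreviated), unwinds the stack in reverse creation order:

  scripts/rpc.py bdev_lvol_delete c3953a40-031d-440a-a0be-be91a3e15ac6             # the lvol
  scripts/rpc.py bdev_lvol_delete_lvstore -u ecfa76bb-fb4e-43da-9cac-2c790fdff74c  # its lvstore
  scripts/rpc.py bdev_aio_delete aio_bdev                                          # the base bdev
  rm -f test/nvmf/target/aio_bdev                                                  # the backing file

Deleting in this order keeps the shutdown clean; removing aio_bdev while the lvstore is still loaded is what produced the vbdev_lvs_hotremove_cb notice earlier in the run.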
00:10:08.843 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:09.101 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:09.101 00:10:09.101 real 0m22.193s 00:10:09.101 user 0m55.367s 00:10:09.101 sys 0m5.371s 00:10:09.101 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:09.101 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:09.101 ************************************ 00:10:09.101 END TEST lvs_grow_dirty 00:10:09.101 ************************************ 00:10:09.101 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:09.101 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:10:09.101 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:10:09.101 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:10:09.101 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:09.101 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:10:09.101 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:10:09.101 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:10:09.101 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:09.101 nvmf_trace.0 00:10:09.101 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:10:09.101 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:09.101 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:09.101 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:10:09.101 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:09.101 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:10:09.101 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:09.101 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:09.101 rmmod nvme_tcp 00:10:09.101 rmmod nvme_fabrics 00:10:09.101 rmmod nvme_keyring 00:10:09.101 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:09.101 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:10:09.101 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:10:09.101 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2030882 ']' 00:10:09.101 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2030882 00:10:09.101 
11:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 2030882 ']' 00:10:09.101 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 2030882 00:10:09.101 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:10:09.101 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:09.101 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2030882 00:10:09.101 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:09.101 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:09.101 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2030882' 00:10:09.101 killing process with pid 2030882 00:10:09.101 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 2030882 00:10:09.101 11:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 2030882 00:10:09.360 11:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:09.360 11:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:09.360 11:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:09.360 11:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:09.360 11:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:09.360 11:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.360 11:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.360 11:18:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.894 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:11.894 00:10:11.894 real 0m48.078s 00:10:11.894 user 1m21.582s 00:10:11.894 sys 0m10.248s 00:10:11.894 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:11.894 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:11.894 ************************************ 00:10:11.894 END TEST nvmf_lvs_grow 00:10:11.894 ************************************ 00:10:11.894 11:18:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:11.894 11:18:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:11.894 11:18:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:11.894 11:18:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:11.894 ************************************ 00:10:11.894 START TEST nvmf_bdev_io_wait 00:10:11.894 ************************************ 00:10:11.894 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:11.894 * Looking for test storage... 00:10:11.894 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:11.894 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:11.894 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:10:11.894 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:11.895 
11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:10:11.895 11:18:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:10:14.429 11:18:09 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:14.429 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:14.429 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:14.429 Found net devices under 0000:84:00.0: cvl_0_0 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:14.429 Found net devices under 0000:84:00.1: cvl_0_1 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:14.429 11:18:09 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:14.429 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:14.430 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:14.430 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:14.430 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:14.430 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:14.430 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:14.430 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:14.430 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:14.430 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:10:14.430 00:10:14.430 --- 10.0.0.2 ping statistics --- 00:10:14.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.430 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:10:14.430 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:14.430 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:14.430 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:10:14.430 00:10:14.430 --- 10.0.0.1 ping statistics --- 00:10:14.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.430 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:10:14.430 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:14.430 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:10:14.430 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:14.430 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:14.430 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:14.430 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:14.430 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:14.430 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:14.430 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:14.430 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:14.430 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:14.430 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:14.430 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:14.430 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2034134 00:10:14.430 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:14.430 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2034134 00:10:14.430 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 2034134 ']' 00:10:14.430 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.430 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:14.430 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.430 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:14.430 11:18:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:14.430 [2024-07-26 11:18:09.985283] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
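Both pings completing confirms the namespace plumbing that nvmf_tcp_init performed above: the two physical E810 ports are split so the target runs in its own network namespace while the initiator stays in the default one. Condensed into one place, with device names and addresses exactly as logged (a sketch, not the full helper; error handling omitted):

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1    # start clean
    ip netns add cvl_0_0_ns_spdk                  # target gets its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side, default netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                            # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator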
00:10:14.430 [2024-07-26 11:18:09.985390] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:14.430 EAL: No free 2048 kB hugepages reported on node 1 00:10:14.430 [2024-07-26 11:18:10.089447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:14.688 [2024-07-26 11:18:10.214139] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:14.688 [2024-07-26 11:18:10.214203] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:14.688 [2024-07-26 11:18:10.214220] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:14.688 [2024-07-26 11:18:10.214233] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:14.688 [2024-07-26 11:18:10.214244] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:14.688 [2024-07-26 11:18:10.214312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.688 [2024-07-26 11:18:10.214366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:14.688 [2024-07-26 11:18:10.214417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:14.688 [2024-07-26 11:18:10.214420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.688 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:14.688 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:10:14.688 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:14.688 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:14.688 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:14.688 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:14.688 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:14.688 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.688 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:14.688 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.688 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:14.688 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.688 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:14.947 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.947 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:14.947 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.947 11:18:10 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:14.947 [2024-07-26 11:18:10.360132] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:14.947 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.947 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:14.947 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.947 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:14.947 Malloc0 00:10:14.947 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.947 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:14.947 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.947 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:14.947 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.947 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:14.947 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.947 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:14.947 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.947 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:14.947 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.947 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:14.947 [2024-07-26 11:18:10.426913] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:14.947 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.947 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2034318 00:10:14.947 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:14.947 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:14.947 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2034321 00:10:14.947 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:14.947 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:14.947 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:14.947 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:14.947 { 00:10:14.947 "params": { 00:10:14.947 "name": "Nvme$subsystem", 00:10:14.947 "trtype": "$TEST_TRANSPORT", 00:10:14.947 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:14.947 "adrfam": "ipv4", 00:10:14.947 "trsvcid": "$NVMF_PORT", 00:10:14.947 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:14.947 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:14.947 "hdgst": ${hdgst:-false}, 00:10:14.947 "ddgst": ${ddgst:-false} 00:10:14.947 }, 00:10:14.947 "method": "bdev_nvme_attach_controller" 00:10:14.947 } 00:10:14.947 EOF 00:10:14.947 )") 00:10:14.947 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:14.947 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:14.947 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2034324 00:10:14.947 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:14.947 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:14.947 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:14.947 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:14.947 { 00:10:14.947 "params": { 00:10:14.948 "name": "Nvme$subsystem", 00:10:14.948 "trtype": "$TEST_TRANSPORT", 00:10:14.948 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:14.948 "adrfam": "ipv4", 00:10:14.948 "trsvcid": "$NVMF_PORT", 00:10:14.948 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:14.948 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:14.948 "hdgst": ${hdgst:-false}, 00:10:14.948 "ddgst": ${ddgst:-false} 00:10:14.948 }, 00:10:14.948 "method": "bdev_nvme_attach_controller" 00:10:14.948 } 00:10:14.948 EOF 00:10:14.948 )") 00:10:14.948 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:14.948 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:14.948 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2034327 00:10:14.948 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:14.948 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:14.948 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:14.948 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:14.948 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:14.948 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:14.948 { 00:10:14.948 "params": { 00:10:14.948 "name": "Nvme$subsystem", 00:10:14.948 "trtype": "$TEST_TRANSPORT", 00:10:14.948 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:14.948 "adrfam": "ipv4", 00:10:14.948 "trsvcid": "$NVMF_PORT", 00:10:14.948 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:14.948 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:14.948 "hdgst": ${hdgst:-false}, 00:10:14.948 "ddgst": ${ddgst:-false} 00:10:14.948 }, 00:10:14.948 "method": "bdev_nvme_attach_controller" 00:10:14.948 } 00:10:14.948 EOF 00:10:14.948 )") 00:10:14.948 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:14.948 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:14.948 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:14.948 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:14.948 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:14.948 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:14.948 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:14.948 { 00:10:14.948 "params": { 00:10:14.948 "name": "Nvme$subsystem", 00:10:14.948 "trtype": "$TEST_TRANSPORT", 00:10:14.948 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:14.948 "adrfam": "ipv4", 00:10:14.948 "trsvcid": "$NVMF_PORT", 00:10:14.948 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:14.948 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:14.948 "hdgst": ${hdgst:-false}, 00:10:14.948 "ddgst": ${ddgst:-false} 00:10:14.948 }, 00:10:14.948 "method": "bdev_nvme_attach_controller" 00:10:14.948 } 00:10:14.948 EOF 00:10:14.948 )") 00:10:14.948 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:14.948 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:14.948 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2034318 00:10:14.948 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:14.948 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:14.948 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:14.948 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:14.948 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:14.948 "params": { 00:10:14.948 "name": "Nvme1", 00:10:14.948 "trtype": "tcp", 00:10:14.948 "traddr": "10.0.0.2", 00:10:14.948 "adrfam": "ipv4", 00:10:14.948 "trsvcid": "4420", 00:10:14.948 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:14.948 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:14.948 "hdgst": false, 00:10:14.948 "ddgst": false 00:10:14.948 }, 00:10:14.948 "method": "bdev_nvme_attach_controller" 00:10:14.948 }' 00:10:14.948 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:10:14.948 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:14.948 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:14.948 "params": { 00:10:14.948 "name": "Nvme1", 00:10:14.948 "trtype": "tcp", 00:10:14.948 "traddr": "10.0.0.2", 00:10:14.948 "adrfam": "ipv4", 00:10:14.948 "trsvcid": "4420", 00:10:14.948 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:14.948 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:14.948 "hdgst": false, 00:10:14.948 "ddgst": false 00:10:14.948 }, 00:10:14.948 "method": "bdev_nvme_attach_controller" 00:10:14.948 }' 00:10:14.948 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:14.948 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:14.948 "params": { 00:10:14.948 "name": "Nvme1", 00:10:14.948 "trtype": "tcp", 00:10:14.948 "traddr": "10.0.0.2", 00:10:14.948 "adrfam": "ipv4", 00:10:14.948 "trsvcid": "4420", 00:10:14.948 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:14.948 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:14.948 "hdgst": false, 00:10:14.948 "ddgst": false 00:10:14.948 }, 00:10:14.948 "method": "bdev_nvme_attach_controller" 00:10:14.948 }' 00:10:14.948 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:14.948 11:18:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:14.948 "params": { 00:10:14.948 "name": "Nvme1", 00:10:14.948 "trtype": "tcp", 00:10:14.948 "traddr": "10.0.0.2", 00:10:14.948 "adrfam": "ipv4", 00:10:14.948 "trsvcid": "4420", 00:10:14.948 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:14.948 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:14.948 "hdgst": false, 00:10:14.948 "ddgst": false 00:10:14.948 }, 00:10:14.948 "method": "bdev_nvme_attach_controller" 00:10:14.948 }' 00:10:14.948 [2024-07-26 11:18:10.477544] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:10:14.948 [2024-07-26 11:18:10.477633] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:14.948 [2024-07-26 11:18:10.478498] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:10:14.948 [2024-07-26 11:18:10.478497] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:10:14.948 [2024-07-26 11:18:10.478525] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
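The four JSON blobs printed above come from gen_nvmf_target_json: one bdev_nvme_attach_controller entry per subsystem, comma-joined via IFS and run through jq. A trimmed sketch of that pattern as reconstructed from the xtrace; the exact shape of the final wrapper the real helper in nvmf/common.sh emits for bdevperf is not visible in the log and is assumed here:

    gen_nvmf_target_json() {
        local subsystem config=()
        for subsystem in "${@:-1}"; do
            config+=("$(cat <<-EOF
	{
	  "params": {
	    "name": "Nvme$subsystem",
	    "trtype": "$TEST_TRANSPORT",
	    "traddr": "$NVMF_FIRST_TARGET_IP",
	    "adrfam": "ipv4",
	    "trsvcid": "$NVMF_PORT",
	    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
	    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
	    "hdgst": ${hdgst:-false},
	    "ddgst": ${ddgst:-false}
	  },
	  "method": "bdev_nvme_attach_controller"
	}
	EOF
            )")
        done
        local IFS=,                             # join the entries with commas
        printf '%s\n' "${config[*]}" | jq .     # pretty-print / validate the result
    }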
00:10:14.948 [2024-07-26 11:18:10.478586] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:14.948 [2024-07-26 11:18:10.478586] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:14.948 [2024-07-26 11:18:10.478589] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:14.948 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.206 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.206 [2024-07-26 11:18:10.643605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.206 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.206 [2024-07-26 11:18:10.723946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.206 [2024-07-26 11:18:10.746661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:15.206 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.206 [2024-07-26 11:18:10.804168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.206 [2024-07-26 11:18:10.823504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:15.465 [2024-07-26 11:18:10.903115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:10:15.465 [2024-07-26 11:18:10.920687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.465 [2024-07-26 11:18:11.030929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:15.723 Running I/O for 1 seconds... 00:10:15.723 Running I/O for 1 seconds... 00:10:15.723 Running I/O for 1 seconds... 00:10:15.723 Running I/O for 1 seconds...
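Each "Running I/O for 1 seconds..." line above is one of the four concurrent bdevperf instances; the --json /dev/fd/63 in their logged command lines is bash process substitution feeding each instance its generated config over an anonymous pipe. The launch pattern for the write job, reconstructed from the logged arguments (the backgrounding and PID capture are assumed, since only WRITE_PID=2034318 is visible):

    # -m pins the instance to one core; -i sets its shm instance id (file-prefix spdkN)
    ./build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w write -t 1 -s 256 &
    WRITE_PID=$!
    # read (-m 0x20 -i 2), flush (-m 0x40 -i 3) and unmap (-m 0x80 -i 4) are
    # started the same way; the script later reaps them with: wait "$WRITE_PID" etc.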
00:10:16.658 00:10:16.658 Latency(us) 00:10:16.658 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:16.658 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:16.658 Nvme1n1 : 1.00 177820.61 694.61 0.00 0.00 716.80 286.72 885.95 00:10:16.658 =================================================================================================================== 00:10:16.658 Total : 177820.61 694.61 0.00 0.00 716.80 286.72 885.95 00:10:16.658 00:10:16.658 Latency(us) 00:10:16.658 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:16.658 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:16.658 Nvme1n1 : 1.01 9211.63 35.98 0.00 0.00 13827.74 9077.95 20680.25 00:10:16.658 =================================================================================================================== 00:10:16.658 Total : 9211.63 35.98 0.00 0.00 13827.74 9077.95 20680.25 00:10:16.658 00:10:16.658 Latency(us) 00:10:16.658 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:16.658 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:16.658 Nvme1n1 : 1.01 8507.82 33.23 0.00 0.00 14980.00 7864.32 25049.32 00:10:16.658 =================================================================================================================== 00:10:16.658 Total : 8507.82 33.23 0.00 0.00 14980.00 7864.32 25049.32 00:10:16.658 00:10:16.658 Latency(us) 00:10:16.658 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:16.658 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:16.658 Nvme1n1 : 1.01 7678.84 30.00 0.00 0.00 16592.42 5922.51 27767.85 00:10:16.658 =================================================================================================================== 00:10:16.658 Total : 7678.84 30.00 0.00 0.00 16592.42 5922.51 27767.85 00:10:17.225 11:18:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2034321 00:10:17.225 11:18:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2034324 00:10:17.225 11:18:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2034327 00:10:17.225 11:18:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:17.225 11:18:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.225 11:18:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:17.225 11:18:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.225 11:18:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:17.225 11:18:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:17.225 11:18:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:17.225 11:18:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:10:17.225 11:18:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:17.225 11:18:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:10:17.225 11:18:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 
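As a sanity check on the tables above, the MiB/s column is just IOPS multiplied by the 4096-byte IO size from the job line; for the read job:

    9211.63 IOPS x 4096 B = 37,730,836 B/s / 2^20 = 35.98 MiB/s

The flush job's far higher rate (177820.61 IOPS, 694.61 MiB/s by the same arithmetic) reflects that flush commands carry no data payload over the TCP transport, so they complete without moving any of the nominal 4 KiB per IO.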
00:10:17.225 11:18:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:17.225 rmmod nvme_tcp 00:10:17.225 rmmod nvme_fabrics 00:10:17.225 rmmod nvme_keyring 00:10:17.225 11:18:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:17.225 11:18:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:10:17.225 11:18:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:10:17.225 11:18:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2034134 ']' 00:10:17.225 11:18:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2034134 00:10:17.225 11:18:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 2034134 ']' 00:10:17.225 11:18:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 2034134 00:10:17.225 11:18:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:10:17.225 11:18:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:17.225 11:18:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2034134 00:10:17.225 11:18:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:17.225 11:18:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:17.225 11:18:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2034134' 00:10:17.225 killing process with pid 2034134 00:10:17.225 11:18:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 2034134 00:10:17.225 11:18:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 2034134 00:10:17.484 11:18:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:17.484 11:18:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:17.484 11:18:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:17.484 11:18:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:17.484 11:18:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:17.485 11:18:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.485 11:18:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.485 11:18:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.018 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:20.018 00:10:20.018 real 0m7.942s 00:10:20.018 user 0m17.243s 00:10:20.018 sys 0m4.223s 00:10:20.018 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:20.018 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:20.018 ************************************ 00:10:20.018 END TEST 
nvmf_bdev_io_wait 00:10:20.018 ************************************ 00:10:20.018 11:18:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:20.018 11:18:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:20.018 11:18:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:20.018 11:18:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:20.018 ************************************ 00:10:20.018 START TEST nvmf_queue_depth 00:10:20.018 ************************************ 00:10:20.018 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:20.018 * Looking for test storage... 00:10:20.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.018 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:20.018 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:20.018 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:20.018 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:20.018 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:20.018 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:20.018 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:20.018 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:20.018 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:20.018 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:20.018 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:20.018 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:20.018 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:20.018 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:20.018 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:20.018 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:20.018 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:20.018 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:20.018 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:20.018 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- 
# [[ -e /bin/wpdk_common.sh ]] 00:10:20.018 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.018 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.018 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.018 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.019 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.019 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:20.019 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.019 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:10:20.019 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:20.019 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:20.019 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- 
# '[' 0 -eq 1 ']' 00:10:20.019 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:20.019 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:20.019 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:20.019 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:20.019 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:20.019 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:20.019 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:20.019 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:20.019 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:20.019 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:20.019 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:20.019 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:20.019 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:20.019 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:20.019 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.019 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:20.019 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.019 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:20.019 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:20.019 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:10:20.019 11:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:22.578 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:22.578 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:10:22.578 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:22.578 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:22.578 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:22.578 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:22.578 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:22.578 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:10:22.578 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:22.578 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@296 -- # e810=() 00:10:22.578 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:10:22.578 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:10:22.578 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:22.579 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:22.579 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:22.579 Found net devices under 0000:84:00.0: cvl_0_0 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:22.579 Found net devices under 0000:84:00.1: cvl_0_1 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:22.579 11:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:22.579 11:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:22.579 11:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:22.579 11:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:22.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:22.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:10:22.579 00:10:22.579 --- 10.0.0.2 ping statistics --- 00:10:22.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.579 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:10:22.579 11:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:22.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:22.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:10:22.579 00:10:22.579 --- 10.0.0.1 ping statistics --- 00:10:22.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.579 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:10:22.579 11:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:22.579 11:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:10:22.579 11:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:22.579 11:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:22.579 11:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:22.579 11:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:22.579 11:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:22.579 11:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:22.579 11:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:22.579 11:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:22.579 11:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:22.579 11:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:22.579 11:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:22.579 11:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2036575 00:10:22.580 11:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:22.580 11:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2036575 00:10:22.580 11:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2036575 ']' 00:10:22.580 11:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.580 11:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:22.580 11:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
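The trace above is the entire network fixture for this test: the harness takes one dual-port E810 NIC, moves port cvl_0_0 into a fresh network namespace to act as the NVMe/TCP target at 10.0.0.2, leaves cvl_0_1 in the root namespace as the initiator at 10.0.0.1, and proves both directions with a ping before starting nvmf_tgt inside that namespace (note the 'ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt' launch above). A minimal sketch of the same topology, using the interface and namespace names from this run:

  # target port lives in its own namespace (names taken from this log)
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # initiator port stays in the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip link set cvl_0_1 up

  # open the default NVMe/TCP port and verify reachability both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Isolating the target port in a namespace is what lets a single host exercise a physical NIC end to end: traffic between 10.0.0.1 and 10.0.0.2 must leave one port and arrive on the other instead of being short-circuited through loopback.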
00:10:22.580 11:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:22.580 11:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:22.580 [2024-07-26 11:18:18.135820] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:10:22.580 [2024-07-26 11:18:18.135915] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:22.580 EAL: No free 2048 kB hugepages reported on node 1 00:10:22.580 [2024-07-26 11:18:18.218818] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.838 [2024-07-26 11:18:18.356865] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:22.838 [2024-07-26 11:18:18.356934] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:22.838 [2024-07-26 11:18:18.356954] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:22.838 [2024-07-26 11:18:18.356971] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:22.838 [2024-07-26 11:18:18.356986] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:22.838 [2024-07-26 11:18:18.357023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:23.774 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:23.774 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:23.774 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:23.774 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:23.774 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:23.774 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:23.774 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:23.774 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.774 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:23.774 [2024-07-26 11:18:19.245382] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:23.774 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.774 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:23.774 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.774 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:23.774 Malloc0 00:10:23.774 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.774 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:23.774 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.774 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:23.774 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.774 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:23.774 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.774 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:23.774 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.774 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:23.774 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.774 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:23.774 [2024-07-26 11:18:19.310927] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:23.774 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.774 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2036727 00:10:23.774 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:23.774 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:23.774 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2036727 /var/tmp/bdevperf.sock 00:10:23.774 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2036727 ']' 00:10:23.774 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:23.774 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:23.774 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:23.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:23.774 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:23.774 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:23.774 [2024-07-26 11:18:19.370706] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
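Everything bdevperf needs has now been provisioned over JSON-RPC: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE above), a subsystem with that bdev as a namespace, and a listener on 10.0.0.2:4420; bdevperf itself was launched with -q 1024 -o 4096 -w verify -t 10, i.e. 1024 outstanding 4 KiB verify I/Os for 10 seconds. The same sequence, condensed from the rpc_cmd traces above into direct calls to SPDK's RPC client (rpc_py is bound to scripts/rpc.py later in this log; the sockets and NQN are the ones from this run):

  # target side (default RPC socket /var/tmp/spdk.sock)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # bdevperf side (its own RPC socket), then kick off the workload
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

As a sanity check on the result table below: at a fixed queue depth, Little's law ties the columns together, average latency ≈ depth / IOPS = 1024 / 8139.45 ≈ 125.8 ms against the reported 125.2 ms, and 8139.45 IOPS × 4096 B = 31.79 MiB/s, exactly the MiB/s column.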
00:10:23.774 [2024-07-26 11:18:19.370800] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2036727 ] 00:10:23.774 EAL: No free 2048 kB hugepages reported on node 1 00:10:24.032 [2024-07-26 11:18:19.445424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.032 [2024-07-26 11:18:19.566969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.032 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:24.032 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:24.032 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:24.032 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.032 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:24.290 NVMe0n1 00:10:24.290 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.290 11:18:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:24.549 Running I/O for 10 seconds... 00:10:34.514 00:10:34.514 Latency(us) 00:10:34.514 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:34.514 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:34.514 Verification LBA range: start 0x0 length 0x4000 00:10:34.514 NVMe0n1 : 10.07 8139.45 31.79 0.00 0.00 125203.29 17670.45 84274.44 00:10:34.514 =================================================================================================================== 00:10:34.514 Total : 8139.45 31.79 0.00 0.00 125203.29 17670.45 84274.44 00:10:34.514 0 00:10:34.514 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2036727 00:10:34.514 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2036727 ']' 00:10:34.514 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2036727 00:10:34.514 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:34.514 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:34.514 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2036727 00:10:34.514 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:34.514 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:34.514 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2036727' 00:10:34.514 killing process with pid 2036727 00:10:34.514 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2036727 00:10:34.514 Received shutdown 
signal, test time was about 10.000000 seconds 00:10:34.514 00:10:34.514 Latency(us) 00:10:34.514 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:34.514 =================================================================================================================== 00:10:34.514 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:34.514 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2036727 00:10:34.773 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:34.773 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:34.773 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:34.773 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:10:34.773 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:34.773 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:10:34.773 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:34.773 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:34.773 rmmod nvme_tcp 00:10:34.773 rmmod nvme_fabrics 00:10:35.032 rmmod nvme_keyring 00:10:35.032 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:35.032 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:10:35.032 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:10:35.032 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2036575 ']' 00:10:35.032 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2036575 00:10:35.032 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2036575 ']' 00:10:35.032 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2036575 00:10:35.032 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:35.032 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:35.032 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2036575 00:10:35.032 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:35.032 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:35.032 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2036575' 00:10:35.032 killing process with pid 2036575 00:10:35.032 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2036575 00:10:35.032 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2036575 00:10:35.291 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:35.291 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:35.291 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:35.291 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:35.291 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:35.291 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.291 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.291 11:18:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.824 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:37.824 00:10:37.824 real 0m17.772s 00:10:37.824 user 0m24.057s 00:10:37.824 sys 0m3.937s 00:10:37.824 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:37.824 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:37.824 ************************************ 00:10:37.824 END TEST nvmf_queue_depth 00:10:37.824 ************************************ 00:10:37.824 11:18:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:37.824 11:18:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:37.824 11:18:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:37.824 11:18:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:37.824 ************************************ 00:10:37.824 START TEST nvmf_target_multipath 00:10:37.824 ************************************ 00:10:37.824 11:18:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:37.824 * Looking for test storage... 
00:10:37.824 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:37.824 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:37.824 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:37.824 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:37.824 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.824 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.824 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.824 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.824 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.824 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.824 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.824 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.824 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.824 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:37.824 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:37.824 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.824 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.824 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:37.824 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.824 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:37.824 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.824 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.824 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.824 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.824 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.824 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.824 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:37.825 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.825 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:10:37.825 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:37.825 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:37.825 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.825 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.825 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.825 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:37.825 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:37.825 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:37.825 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:37.825 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:37.825 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:37.825 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:37.825 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:37.825 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:37.825 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:37.825 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:37.825 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:37.825 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:37.825 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.825 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.825 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.825 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:37.825 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:37.825 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:10:37.825 11:18:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 
00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:40.355 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:40.355 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:40.355 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:40.356 Found net devices under 0000:84:00.0: cvl_0_0 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.356 11:18:35 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:40.356 Found net devices under 0000:84:00.1: cvl_0_1 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:40.356 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:40.356 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:10:40.356 00:10:40.356 --- 10.0.0.2 ping statistics --- 00:10:40.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.356 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:40.356 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:40.356 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:10:40.356 00:10:40.356 --- 10.0.0.1 ping statistics --- 00:10:40.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.356 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:40.356 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:40.357 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:40.357 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:40.357 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:40.357 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:40.357 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:40.357 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:40.357 only one NIC for nvmf test 00:10:40.357 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:40.357 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:40.357 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:10:40.357 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:40.357 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:10:40.357 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:40.357 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:40.357 rmmod nvme_tcp 00:10:40.357 rmmod nvme_fabrics 00:10:40.357 rmmod nvme_keyring 00:10:40.357 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:40.357 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:10:40.357 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:10:40.357 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:40.357 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:40.357 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:40.357 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:40.357 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:40.357 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:40.357 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.357 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:40.357 11:18:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:42.887 00:10:42.887 real 0m5.107s 
00:10:42.887 user 0m0.922s 00:10:42.887 sys 0m2.184s 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:42.887 ************************************ 00:10:42.887 END TEST nvmf_target_multipath 00:10:42.887 ************************************ 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:42.887 ************************************ 00:10:42.887 START TEST nvmf_zcopy 00:10:42.887 ************************************ 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:42.887 * Looking for test storage... 00:10:42.887 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:42.887 11:18:38 
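The START/END banners and the real/user/sys block above come from the harness's run_test wrapper, which times each test script and propagates its exit code. A condensed sketch of that pattern, reconstructed from the banners in this log rather than copied from autotest_common.sh:

    # hypothetical reconstruction of the run_test wrapper evidenced by the trace
    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                  # e.g. zcopy.sh --transport=tcp; time output is the real/user/sys block
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }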
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:42.887 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:42.888 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:42.888 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:42.888 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.888 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.888 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.888 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:42.888 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.888 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:10:42.888 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:42.888 11:18:38 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:42.888 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:42.888 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:42.888 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:42.888 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:42.888 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:42.888 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:42.888 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:42.888 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:42.888 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:42.888 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:42.888 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:42.888 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:42.888 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.888 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:42.888 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.888 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:42.888 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:42.888 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:10:42.888 11:18:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:10:45.418 11:18:40 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:45.418 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:45.418 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 
-- # [[ ice == unbound ]] 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:45.418 Found net devices under 0000:84:00.0: cvl_0_0 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:45.418 Found net devices under 0000:84:00.1: cvl_0_1 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:45.418 11:18:40 
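The device discovery above boils down to mapping each allowed PCI function to its kernel netdev through sysfs. The same lookup can be done by hand (PCI address taken from this trace; 0x8086:0x159b is an Intel E810 port driven by ice, and the cvl_0_* names are specific to this CI host):

    pci=0000:84:00.0
    ls /sys/bus/pci/devices/$pci/net/       # -> cvl_0_0
    cat /sys/class/net/cvl_0_0/operstate    # the trace's '[[ up == up ]]' corresponds to this link-state check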
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:45.418 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:45.419 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:45.419 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:45.419 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:45.419 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:45.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:45.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:10:45.419 00:10:45.419 --- 10.0.0.2 ping statistics --- 00:10:45.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.419 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:10:45.419 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:45.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:45.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.062 ms 00:10:45.419 00:10:45.419 --- 10.0.0.1 ping statistics --- 00:10:45.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.419 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:10:45.419 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:45.419 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:10:45.419 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:45.419 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:45.419 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:45.419 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:45.419 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:45.419 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:45.419 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:45.419 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:45.419 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:45.419 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:45.419 11:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:45.419 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2042071 00:10:45.419 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:45.419 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2042071 00:10:45.419 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 2042071 ']' 00:10:45.419 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.419 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:45.419 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.419 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:45.419 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:45.419 [2024-07-26 11:18:41.076353] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
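At this point nvmf_tcp_init has turned the two E810 ports into a self-contained NVMe/TCP rig: cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target (10.0.0.2), both directions are ping-verified, and nvmf_tgt is launched inside the namespace (its startup banner appears just above). The same topology can be rebuilt by hand from the commands in the trace (run as root; the nvmf_tgt path is shortened from the full workspace path):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                             # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &   # target runs inside the netns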
00:10:45.419 [2024-07-26 11:18:41.076503] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:45.678 EAL: No free 2048 kB hugepages reported on node 1 00:10:45.678 [2024-07-26 11:18:41.176029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.678 [2024-07-26 11:18:41.318093] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:45.678 [2024-07-26 11:18:41.318172] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:45.678 [2024-07-26 11:18:41.318192] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:45.678 [2024-07-26 11:18:41.318209] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:45.678 [2024-07-26 11:18:41.318223] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:45.678 [2024-07-26 11:18:41.318265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:45.937 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:45.937 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:10:45.937 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:45.937 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:45.937 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:45.937 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:45.937 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:45.937 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:45.937 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.937 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:45.937 [2024-07-26 11:18:41.493733] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:45.937 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.937 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:45.937 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.937 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:45.937 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.937 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:45.937 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.937 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:45.937 [2024-07-26 11:18:41.509978] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:45.937 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.937 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:45.937 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.937 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:45.937 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.937 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:45.937 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.937 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:45.937 malloc0 00:10:45.937 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.937 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:45.937 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.937 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:45.937 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.937 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:45.937 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:45.937 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:45.937 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:45.937 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:45.937 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:45.937 { 00:10:45.937 "params": { 00:10:45.937 "name": "Nvme$subsystem", 00:10:45.937 "trtype": "$TEST_TRANSPORT", 00:10:45.937 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:45.937 "adrfam": "ipv4", 00:10:45.937 "trsvcid": "$NVMF_PORT", 00:10:45.937 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:45.937 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:45.937 "hdgst": ${hdgst:-false}, 00:10:45.937 "ddgst": ${ddgst:-false} 00:10:45.937 }, 00:10:45.937 "method": "bdev_nvme_attach_controller" 00:10:45.937 } 00:10:45.937 EOF 00:10:45.937 )") 00:10:45.937 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:45.937 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
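Stripped of the xtrace noise, the target-side bring-up the zcopy test just performed is a short RPC sequence against that nvmf_tgt (rpc_cmd in the trace is a thin wrapper around scripts/rpc.py; flags shown exactly as traced):

    scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy     # TCP transport with zero-copy enabled
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0            # 32 MiB RAM disk, 4096-byte blocks
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

bdevperf then connects from the root namespace as the initiator, using a JSON config generated on the fly: the printf output immediately below is the expanded result of the heredoc template that gen_nvmf_target_json builds.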
00:10:45.937 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:10:45.937 11:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:10:45.937 "params": {
00:10:45.937 "name": "Nvme1",
00:10:45.937 "trtype": "tcp",
00:10:45.937 "traddr": "10.0.0.2",
00:10:45.937 "adrfam": "ipv4",
00:10:45.937 "trsvcid": "4420",
00:10:45.937 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:10:45.937 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:10:45.937 "hdgst": false,
00:10:45.937 "ddgst": false
00:10:45.937 },
00:10:45.937 "method": "bdev_nvme_attach_controller"
00:10:45.937 }'
00:10:46.195 [2024-07-26 11:18:41.615301] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization...
00:10:46.195 [2024-07-26 11:18:41.615396] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2042104 ]
00:10:46.195 EAL: No free 2048 kB hugepages reported on node 1
00:10:46.195 [2024-07-26 11:18:41.684827] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:46.195 [2024-07-26 11:18:41.806731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:10:46.794 Running I/O for 10 seconds...
00:10:56.761
00:10:56.761 Latency(us)
00:10:56.761 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:56.761 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:10:56.761 Verification LBA range: start 0x0 length 0x1000
00:10:56.761 Nvme1n1 : 10.01 5584.43 43.63 0.00 0.00 22858.15 4102.07 32039.82
00:10:56.761 ===================================================================================================================
00:10:56.761 Total : 5584.43 43.63 0.00 0.00 22858.15 4102.07 32039.82
00:10:57.019 11:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2043419
00:10:57.019 11:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:10:57.019 11:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:57.019 11:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:10:57.019 11:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:10:57.019 11:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:10:57.019 11:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:10:57.019 11:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:10:57.019 11:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:10:57.019 {
00:10:57.019 "params": {
00:10:57.019 "name": "Nvme$subsystem",
00:10:57.019 "trtype": "$TEST_TRANSPORT",
00:10:57.019 "traddr": "$NVMF_FIRST_TARGET_IP",
00:10:57.019 "adrfam": "ipv4",
00:10:57.019 "trsvcid": "$NVMF_PORT",
00:10:57.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:10:57.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:10:57.019 "hdgst": ${hdgst:-false},
00:10:57.019 "ddgst": ${ddgst:-false}
00:10:57.019 },
00:10:57.019 "method": "bdev_nvme_attach_controller"
00:10:57.019 }
00:10:57.019 EOF
00:10:57.019 )")
[2024-07-26
11:18:52.464896] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.019 [2024-07-26 11:18:52.464956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.019 11:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:57.019 11:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:10:57.019 11:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:57.019 [2024-07-26 11:18:52.472863] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.019 [2024-07-26 11:18:52.472897] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.019 11:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:57.019 "params": { 00:10:57.020 "name": "Nvme1", 00:10:57.020 "trtype": "tcp", 00:10:57.020 "traddr": "10.0.0.2", 00:10:57.020 "adrfam": "ipv4", 00:10:57.020 "trsvcid": "4420", 00:10:57.020 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:57.020 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:57.020 "hdgst": false, 00:10:57.020 "ddgst": false 00:10:57.020 }, 00:10:57.020 "method": "bdev_nvme_attach_controller" 00:10:57.020 }' 00:10:57.020 [2024-07-26 11:18:52.480884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.020 [2024-07-26 11:18:52.480917] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.020 [2024-07-26 11:18:52.488909] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.020 [2024-07-26 11:18:52.488940] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.020 [2024-07-26 11:18:52.496928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.020 [2024-07-26 11:18:52.496959] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.020 [2024-07-26 11:18:52.504950] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.020 [2024-07-26 11:18:52.504980] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.020 [2024-07-26 11:18:52.512970] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.020 [2024-07-26 11:18:52.512999] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.020 [2024-07-26 11:18:52.520993] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.020 [2024-07-26 11:18:52.521024] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.020 [2024-07-26 11:18:52.529014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.020 [2024-07-26 11:18:52.529045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.020 [2024-07-26 11:18:52.537037] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.020 [2024-07-26 11:18:52.537066] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.020 [2024-07-26 11:18:52.545062] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.020 [2024-07-26 11:18:52.545092] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.020 [2024-07-26 11:18:52.553085] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:57.020 [2024-07-26 11:18:52.553116] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.020 [2024-07-26 11:18:52.560261] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:10:57.020 [2024-07-26 11:18:52.560426] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2043419 ] 00:10:57.020 [2024-07-26 11:18:52.561111] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.020 [2024-07-26 11:18:52.561142] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.020 [2024-07-26 11:18:52.569131] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.020 [2024-07-26 11:18:52.569163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.020 [2024-07-26 11:18:52.577156] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.020 [2024-07-26 11:18:52.577186] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.020 [2024-07-26 11:18:52.585179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.020 [2024-07-26 11:18:52.585209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.020 [2024-07-26 11:18:52.593201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.020 [2024-07-26 11:18:52.593232] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.020 [2024-07-26 11:18:52.601226] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.020 [2024-07-26 11:18:52.601257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.020 [2024-07-26 11:18:52.609247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.020 [2024-07-26 11:18:52.609277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.020 [2024-07-26 11:18:52.617270] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.020 [2024-07-26 11:18:52.617300] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.020 [2024-07-26 11:18:52.625293] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.020 [2024-07-26 11:18:52.625323] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.020 EAL: No free 2048 kB hugepages reported on node 1 00:10:57.020 [2024-07-26 11:18:52.633316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.020 [2024-07-26 11:18:52.633347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.020 [2024-07-26 11:18:52.641319] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.020 [2024-07-26 11:18:52.641344] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.020 [2024-07-26 11:18:52.649339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.020 [2024-07-26 11:18:52.649364] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.020 
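The stream of 'Requested NSID 1 already in use' / 'Unable to add namespace' pairs around this point is expected, not a failure: while the second bdevperf run (randrw, 50% reads, 5 seconds, queue depth 128, 8192-byte I/O) drives traffic, the test keeps trying to re-add the namespace that already occupies NSID 1. Each attempt forces the subsystem through an internal pause/resume cycle, exercising zero-copy I/O against concurrent subsystem state changes. The probe reduces to a loop of this shape (a sketch; the termination condition is illustrative, not taken from the log):

    # each call is expected to fail with 'NSID 1 already in use',
    # pausing and resuming nqn.2016-06.io.spdk:cnode1 under load
    while kill -0 "$perfpid" 2>/dev/null; do    # $perfpid is the traced bdevperf pid (2043419)
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done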
[2024-07-26 11:18:52.657359] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.020 [2024-07-26 11:18:52.657384] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.020 [2024-07-26 11:18:52.665380] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.020 [2024-07-26 11:18:52.665405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.020 [2024-07-26 11:18:52.672091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.020 [2024-07-26 11:18:52.673401] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.020 [2024-07-26 11:18:52.673426] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.278 [2024-07-26 11:18:52.681465] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.278 [2024-07-26 11:18:52.681503] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.278 [2024-07-26 11:18:52.689462] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.278 [2024-07-26 11:18:52.689494] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.278 [2024-07-26 11:18:52.697476] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.278 [2024-07-26 11:18:52.697501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.278 [2024-07-26 11:18:52.705494] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.278 [2024-07-26 11:18:52.705520] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.278 [2024-07-26 11:18:52.713514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.278 [2024-07-26 11:18:52.713539] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.278 [2024-07-26 11:18:52.721536] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.278 [2024-07-26 11:18:52.721561] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.278 [2024-07-26 11:18:52.729560] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.278 [2024-07-26 11:18:52.729586] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.278 [2024-07-26 11:18:52.737593] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.278 [2024-07-26 11:18:52.737620] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.278 [2024-07-26 11:18:52.745623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.278 [2024-07-26 11:18:52.745671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.278 [2024-07-26 11:18:52.753625] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.278 [2024-07-26 11:18:52.753651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.278 [2024-07-26 11:18:52.761646] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.278 [2024-07-26 11:18:52.761672] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.279 [2024-07-26 11:18:52.769670] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:57.279 [2024-07-26 11:18:52.769695] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:57.279 [2024-07-26 11:18:52.796381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:10:57.538 Running I/O for 5 seconds...
00:10:57.279-00:11:00.138 [2024-07-26 11:18:52.777691 .. 11:18:56.199087] The error pair above (subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) repeats verbatim throughout this interval, roughly every 8-12 ms, differing only in its timestamps; several hundred further repetitions elided.
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.655 [2024-07-26 11:18:56.199116] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.655 [2024-07-26 11:18:56.211973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.655 [2024-07-26 11:18:56.212003] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.655 [2024-07-26 11:18:56.223118] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.655 [2024-07-26 11:18:56.223148] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.655 [2024-07-26 11:18:56.234478] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.655 [2024-07-26 11:18:56.234508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.655 [2024-07-26 11:18:56.245944] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.655 [2024-07-26 11:18:56.245974] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.655 [2024-07-26 11:18:56.257306] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.655 [2024-07-26 11:18:56.257336] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.655 [2024-07-26 11:18:56.270187] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.655 [2024-07-26 11:18:56.270217] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.655 [2024-07-26 11:18:56.281133] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.655 [2024-07-26 11:18:56.281163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.655 [2024-07-26 11:18:56.291970] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.655 [2024-07-26 11:18:56.291999] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.655 [2024-07-26 11:18:56.303475] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.655 [2024-07-26 11:18:56.303505] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.655 [2024-07-26 11:18:56.315100] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.655 [2024-07-26 11:18:56.315130] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.914 [2024-07-26 11:18:56.326755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.914 [2024-07-26 11:18:56.326785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.914 [2024-07-26 11:18:56.339437] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.914 [2024-07-26 11:18:56.339467] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.914 [2024-07-26 11:18:56.349628] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.914 [2024-07-26 11:18:56.349665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.914 [2024-07-26 11:18:56.361722] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.914 [2024-07-26 11:18:56.361752] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.914 [2024-07-26 11:18:56.373162] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.914 [2024-07-26 11:18:56.373192] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.914 [2024-07-26 11:18:56.384492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.914 [2024-07-26 11:18:56.384522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.914 [2024-07-26 11:18:56.395326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.914 [2024-07-26 11:18:56.395356] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.914 [2024-07-26 11:18:56.406672] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.914 [2024-07-26 11:18:56.406702] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.914 [2024-07-26 11:18:56.419662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.914 [2024-07-26 11:18:56.419692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.914 [2024-07-26 11:18:56.430297] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.914 [2024-07-26 11:18:56.430326] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.914 [2024-07-26 11:18:56.441773] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.914 [2024-07-26 11:18:56.441804] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.914 [2024-07-26 11:18:56.453336] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.914 [2024-07-26 11:18:56.453366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.914 [2024-07-26 11:18:56.465457] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.914 [2024-07-26 11:18:56.465488] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.914 [2024-07-26 11:18:56.477002] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.914 [2024-07-26 11:18:56.477032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.914 [2024-07-26 11:18:56.488379] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.914 [2024-07-26 11:18:56.488409] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.914 [2024-07-26 11:18:56.499931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.914 [2024-07-26 11:18:56.499960] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.914 [2024-07-26 11:18:56.511302] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.914 [2024-07-26 11:18:56.511332] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.914 [2024-07-26 11:18:56.522786] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.914 [2024-07-26 11:18:56.522815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.914 [2024-07-26 11:18:56.534125] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.914 [2024-07-26 11:18:56.534154] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.914 [2024-07-26 11:18:56.545966] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.914 [2024-07-26 11:18:56.545997] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.914 [2024-07-26 11:18:56.557272] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.914 [2024-07-26 11:18:56.557313] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.914 [2024-07-26 11:18:56.568887] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.914 [2024-07-26 11:18:56.568925] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.172 [2024-07-26 11:18:56.580869] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.172 [2024-07-26 11:18:56.580900] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.172 [2024-07-26 11:18:56.592544] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.172 [2024-07-26 11:18:56.592577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.172 [2024-07-26 11:18:56.604479] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.172 [2024-07-26 11:18:56.604510] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.172 [2024-07-26 11:18:56.615968] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.172 [2024-07-26 11:18:56.615999] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.172 [2024-07-26 11:18:56.627263] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.172 [2024-07-26 11:18:56.627294] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.172 [2024-07-26 11:18:56.638770] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.172 [2024-07-26 11:18:56.638801] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.172 [2024-07-26 11:18:56.650576] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.172 [2024-07-26 11:18:56.650607] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.173 [2024-07-26 11:18:56.662145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.173 [2024-07-26 11:18:56.662176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.173 [2024-07-26 11:18:56.673756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.173 [2024-07-26 11:18:56.673787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.173 [2024-07-26 11:18:56.684925] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.173 [2024-07-26 11:18:56.684956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.173 [2024-07-26 11:18:56.696939] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.173 [2024-07-26 11:18:56.696970] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.173 [2024-07-26 11:18:56.709683] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.173 [2024-07-26 11:18:56.709714] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.173 [2024-07-26 11:18:56.721387] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.173 [2024-07-26 11:18:56.721418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.173 [2024-07-26 11:18:56.733342] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.173 [2024-07-26 11:18:56.733384] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.173 [2024-07-26 11:18:56.745017] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.173 [2024-07-26 11:18:56.745047] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.173 [2024-07-26 11:18:56.756987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.173 [2024-07-26 11:18:56.757017] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.173 [2024-07-26 11:18:56.768676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.173 [2024-07-26 11:18:56.768706] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.173 [2024-07-26 11:18:56.780214] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.173 [2024-07-26 11:18:56.780244] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.173 [2024-07-26 11:18:56.792031] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.173 [2024-07-26 11:18:56.792073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.173 [2024-07-26 11:18:56.803286] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.173 [2024-07-26 11:18:56.803315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.173 [2024-07-26 11:18:56.814846] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.173 [2024-07-26 11:18:56.814877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.173 [2024-07-26 11:18:56.826325] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.173 [2024-07-26 11:18:56.826354] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.431 [2024-07-26 11:18:56.837800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.431 [2024-07-26 11:18:56.837829] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.431 [2024-07-26 11:18:56.849802] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.431 [2024-07-26 11:18:56.849832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.431 [2024-07-26 11:18:56.861358] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.431 [2024-07-26 11:18:56.861388] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.431 [2024-07-26 11:18:56.873308] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.431 [2024-07-26 11:18:56.873337] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.431 [2024-07-26 11:18:56.885190] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.431 [2024-07-26 11:18:56.885219] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.431 [2024-07-26 11:18:56.896824] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.431 [2024-07-26 11:18:56.896855] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.431 [2024-07-26 11:18:56.908699] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.431 [2024-07-26 11:18:56.908730] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.431 [2024-07-26 11:18:56.920132] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.431 [2024-07-26 11:18:56.920161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.431 [2024-07-26 11:18:56.931680] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.431 [2024-07-26 11:18:56.931709] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.431 [2024-07-26 11:18:56.944870] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.431 [2024-07-26 11:18:56.944899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.431 [2024-07-26 11:18:56.955596] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.431 [2024-07-26 11:18:56.955626] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.431 [2024-07-26 11:18:56.967222] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.431 [2024-07-26 11:18:56.967252] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.432 [2024-07-26 11:18:56.978968] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.432 [2024-07-26 11:18:56.978998] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.432 [2024-07-26 11:18:56.990902] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.432 [2024-07-26 11:18:56.990932] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.432 [2024-07-26 11:18:57.002930] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.432 [2024-07-26 11:18:57.002960] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.432 [2024-07-26 11:18:57.014982] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.432 [2024-07-26 11:18:57.015020] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.432 [2024-07-26 11:18:57.026912] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.432 [2024-07-26 11:18:57.026942] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.432 [2024-07-26 11:18:57.038639] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.432 [2024-07-26 11:18:57.038669] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.432 [2024-07-26 11:18:57.050704] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.432 [2024-07-26 11:18:57.050734] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.432 [2024-07-26 11:18:57.062567] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.432 [2024-07-26 11:18:57.062596] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.432 [2024-07-26 11:18:57.074257] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.432 [2024-07-26 11:18:57.074287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.432 [2024-07-26 11:18:57.085582] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.432 [2024-07-26 11:18:57.085613] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.690 [2024-07-26 11:18:57.097591] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.690 [2024-07-26 11:18:57.097620] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.690 [2024-07-26 11:18:57.109128] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.690 [2024-07-26 11:18:57.109158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.690 [2024-07-26 11:18:57.121374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.690 [2024-07-26 11:18:57.121403] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.690 [2024-07-26 11:18:57.135130] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.690 [2024-07-26 11:18:57.135160] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.690 [2024-07-26 11:18:57.145800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.690 [2024-07-26 11:18:57.145830] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.690 [2024-07-26 11:18:57.158285] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.690 [2024-07-26 11:18:57.158315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.690 [2024-07-26 11:18:57.169744] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.690 [2024-07-26 11:18:57.169774] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.690 [2024-07-26 11:18:57.181644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.690 [2024-07-26 11:18:57.181685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.690 [2024-07-26 11:18:57.194731] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.690 [2024-07-26 11:18:57.194761] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.690 [2024-07-26 11:18:57.205647] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.690 [2024-07-26 11:18:57.205679] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.690 [2024-07-26 11:18:57.217391] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.690 [2024-07-26 11:18:57.217421] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.690 [2024-07-26 11:18:57.228878] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.690 [2024-07-26 11:18:57.228907] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.690 [2024-07-26 11:18:57.240814] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.690 [2024-07-26 11:18:57.240844] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.690 [2024-07-26 11:18:57.252636] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.691 [2024-07-26 11:18:57.252666] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.691 [2024-07-26 11:18:57.263937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.691 [2024-07-26 11:18:57.263966] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.691 [2024-07-26 11:18:57.276003] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.691 [2024-07-26 11:18:57.276034] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.691 [2024-07-26 11:18:57.287909] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.691 [2024-07-26 11:18:57.287938] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.691 [2024-07-26 11:18:57.299338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.691 [2024-07-26 11:18:57.299368] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.691 [2024-07-26 11:18:57.311055] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.691 [2024-07-26 11:18:57.311085] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.691 [2024-07-26 11:18:57.322728] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.691 [2024-07-26 11:18:57.322758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.691 [2024-07-26 11:18:57.334072] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.691 [2024-07-26 11:18:57.334102] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.691 [2024-07-26 11:18:57.345981] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.691 [2024-07-26 11:18:57.346012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.949 [2024-07-26 11:18:57.357375] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.949 [2024-07-26 11:18:57.357405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.949 [2024-07-26 11:18:57.369063] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.949 [2024-07-26 11:18:57.369094] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.949 [2024-07-26 11:18:57.380312] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.949 [2024-07-26 11:18:57.380343] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.949 [2024-07-26 11:18:57.392401] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.949 [2024-07-26 11:18:57.392439] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.949 [2024-07-26 11:18:57.404507] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.949 [2024-07-26 11:18:57.404537] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.949 [2024-07-26 11:18:57.416498] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.949 [2024-07-26 11:18:57.416528] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.949 [2024-07-26 11:18:57.428112] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.949 [2024-07-26 11:18:57.428142] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.949 [2024-07-26 11:18:57.439861] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.949 [2024-07-26 11:18:57.439891] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.949 [2024-07-26 11:18:57.451567] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.949 [2024-07-26 11:18:57.451597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.949 [2024-07-26 11:18:57.463122] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.949 [2024-07-26 11:18:57.463154] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.949 [2024-07-26 11:18:57.479035] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.949 [2024-07-26 11:18:57.479066] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.949 [2024-07-26 11:18:57.490297] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.949 [2024-07-26 11:18:57.490326] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.949 [2024-07-26 11:18:57.502684] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.949 [2024-07-26 11:18:57.502715] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.949 [2024-07-26 11:18:57.514863] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.949 [2024-07-26 11:18:57.514893] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.949 [2024-07-26 11:18:57.527016] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.949 [2024-07-26 11:18:57.527046] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.949 [2024-07-26 11:18:57.539014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.949 [2024-07-26 11:18:57.539045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.949 [2024-07-26 11:18:57.551324] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.949 [2024-07-26 11:18:57.551353] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.949 [2024-07-26 11:18:57.563438] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.949 [2024-07-26 11:18:57.563474] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.949 [2024-07-26 11:18:57.577234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.949 [2024-07-26 11:18:57.577264] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.949 [2024-07-26 11:18:57.588033] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.949 [2024-07-26 11:18:57.588063] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.949 [2024-07-26 11:18:57.600777] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.949 [2024-07-26 11:18:57.600815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.208 [2024-07-26 11:18:57.612340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.208 [2024-07-26 11:18:57.612371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.208 [2024-07-26 11:18:57.623885] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.208 [2024-07-26 11:18:57.623915] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.208 [2024-07-26 11:18:57.635472] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.208 [2024-07-26 11:18:57.635505] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.208 [2024-07-26 11:18:57.647149] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.208 [2024-07-26 11:18:57.647180] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.208 [2024-07-26 11:18:57.659020] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.208 [2024-07-26 11:18:57.659050] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.208 [2024-07-26 11:18:57.670984] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.208 [2024-07-26 11:18:57.671014] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.208 [2024-07-26 11:18:57.682342] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.208 [2024-07-26 11:18:57.682372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.208 [2024-07-26 11:18:57.694311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.208 [2024-07-26 11:18:57.694341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.208 [2024-07-26 11:18:57.705590] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.208 [2024-07-26 11:18:57.705621] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.208 [2024-07-26 11:18:57.717276] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.208 [2024-07-26 11:18:57.717307] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.208 [2024-07-26 11:18:57.728967] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.208 [2024-07-26 11:18:57.728998] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.208 [2024-07-26 11:18:57.740552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.208 [2024-07-26 11:18:57.740583] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.208 [2024-07-26 11:18:57.752100] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.208 [2024-07-26 11:18:57.752132] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.208 [2024-07-26 11:18:57.764062] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.208 [2024-07-26 11:18:57.764097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.208 [2024-07-26 11:18:57.776051] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.208 [2024-07-26 11:18:57.776081] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.208 [2024-07-26 11:18:57.787831] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.208 [2024-07-26 11:18:57.787862] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.208 [2024-07-26 11:18:57.800145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.208 [2024-07-26 11:18:57.800176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.208 [2024-07-26 11:18:57.813022] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.208 [2024-07-26 11:18:57.813065] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.208 [2024-07-26 11:18:57.824887] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.208 [2024-07-26 11:18:57.824918] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.208 [2024-07-26 11:18:57.836847] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.208 [2024-07-26 11:18:57.836878] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.208 [2024-07-26 11:18:57.850278] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.208 [2024-07-26 11:18:57.850309] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.208 [2024-07-26 11:18:57.861190] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.208 [2024-07-26 11:18:57.861221] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.466 [2024-07-26 11:18:57.872418] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.466 [2024-07-26 11:18:57.872460] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.466 [2024-07-26 11:18:57.885484] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.466 [2024-07-26 11:18:57.885515] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.466 [2024-07-26 11:18:57.896934] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.466 [2024-07-26 11:18:57.896965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.466 [2024-07-26 11:18:57.909847] 
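The pair above is the target rejecting a repeated add-namespace RPC because NSID 1 is still attached to nqn.2016-06.io.spdk:cnode1. A minimal sketch of a loop that provokes the same two errors against a running target, assuming the workspace checkout path seen elsewhere in this log and an existing bdev named malloc0 (the loop itself is illustrative, not taken from zcopy.sh):

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumed checkout location
for i in $(seq 1 10); do
    # NSID 1 is already occupied, so every attempt makes the target log
    # "Requested NSID 1 already in use" followed by "Unable to add namespace".
    "$spdk/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done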
00:11:02.467 Latency(us)
00:11:02.467 Device Information                                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:11:02.467 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:11:02.467 Nvme1n1                                                :       5.01   10912.56      85.25       0.00     0.00   11712.70    5267.15   21748.24
00:11:02.467 ===================================================================================================================
00:11:02.467 Total                                                  :            10912.56      85.25       0.00     0.00   11712.70    5267.15   21748.24
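As a quick consistency check on the table (plain shell arithmetic, not additional log output), the MiB/s and average-latency columns agree with the reported IOPS at this job's 8192-byte IO size and queue depth of 128:

echo '10912.56 * 8192 / 1048576' | bc -l   # ~= 85.25 MiB/s, matching the table
echo '128 / 10912.56 * 1000000' | bc -l    # ~= 11730 us, close to the 11712.70 us average (Little's law)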
00:11:02.467 [2024-07-26 11:18:58.063652] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:02.467 [2024-07-26 11:18:58.063680] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the error pair resumes after the job summary and repeats at roughly 8 ms intervals through 11:18:58.352519; duplicates omitted ...]
00:11:02.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2043419) - No such process
00:11:02.726 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2043419
00:11:02.726 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:02.726 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:02.726 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:11:02.726 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:02.726 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:11:02.726 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:02.726 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:11:02.726 delay0
00:11:02.726 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
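At this point the test has detached NSID 1 and created a delay bdev named delay0 on top of malloc0; the next step re-exposes delay0 as NSID 1. Written out as standalone rpc.py calls, the same sequence looks like the sketch below (rpc.py path assumed from the workspace layout; the -r/-t and -w/-n values are average and p99 read/write latencies in microseconds, so 1000000 makes every IO take about one second):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed path
$rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1             # detach NSID 1
# Stack a delay bdev on malloc0 so every IO is artificially slowed down.
$rpc bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1      # re-attach NSID 1, now backed by delay0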
00:11:02.726 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:11:02.726 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:02.726 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:11:02.726 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:02.726 11:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:11:02.984 EAL: No free 2048 kB hugepages reported on node 1
00:11:02.984 [2024-07-26 11:18:58.446013] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:11:09.541 Initializing NVMe Controllers
00:11:09.541 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:09.541 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:11:09.541 Initialization complete. Launching workers.
00:11:09.541 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 295, failed: 7050
00:11:09.541 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 7264, failed to submit 81
00:11:09.541 success 7135, unsuccessful 129, failed 0
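The abort example's counters are internally consistent, which is a plausible reading that the tool attempted one abort per outstanding IO (plain shell arithmetic for clarity; these lines are not part of the log):

echo $((295 + 7050))   # 7345 IOs overall: completed + failed
echo $((7264 + 81))    # 7345 abort attempts: submitted + failed to submit
echo $((7135 + 129))   # 7264 submitted aborts accounted for: success + unsuccessful (0 failed)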
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:09.541 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:09.541 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2042071' 00:11:09.541 killing process with pid 2042071 00:11:09.541 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 2042071 00:11:09.541 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 2042071 00:11:09.801 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:09.801 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:09.801 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:09.801 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:09.801 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:09.801 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.801 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:09.801 11:19:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.363 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:12.363 00:11:12.363 real 0m29.265s 00:11:12.363 user 0m41.283s 00:11:12.363 sys 0m10.232s 00:11:12.363 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:12.363 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:12.363 ************************************ 00:11:12.363 END TEST nvmf_zcopy 00:11:12.363 ************************************ 00:11:12.363 11:19:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:12.363 11:19:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:12.363 11:19:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:12.363 11:19:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:12.363 ************************************ 00:11:12.363 START TEST nvmf_nmic 00:11:12.363 ************************************ 00:11:12.363 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:12.363 * Looking for test storage... 
00:11:12.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:12.363 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:12.363 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:12.363 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:12.363 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:12.363 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:12.363 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:12.363 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:12.363 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:12.363 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:12.363 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:12.363 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:12.363 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:12.364 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:12.364 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:12.364 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:12.364 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:12.364 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:12.364 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:12.364 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:12.364 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:12.364 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:12.364 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:12.364 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.364 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.364 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.364 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:12.364 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.364 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:11:12.364 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:12.364 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:12.364 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:12.364 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:12.364 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:12.364 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:12.364 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:12.364 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:12.364 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:12.364 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:12.364 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:12.364 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:12.364 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:12.364 11:19:07 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:12.364 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:12.364 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:12.364 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.364 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:12.364 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.364 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:12.364 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:12.364 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:11:12.364 11:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:14.899 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:14.899 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:14.899 11:19:10 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:14.899 Found net devices under 0000:84:00.0: cvl_0_0 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:14.899 Found net devices under 0000:84:00.1: cvl_0_1 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:14.899 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:14.900 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:14.900 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:14.900 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:14.900 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:14.900 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:14.900 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:14.900 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:14.900 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:14.900 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:14.900 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:14.900 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:14.900 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:14.900 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:14.900 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:11:14.900 00:11:14.900 --- 10.0.0.2 ping statistics --- 00:11:14.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.900 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:11:14.900 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:14.900 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:14.900 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:11:14.900 00:11:14.900 --- 10.0.0.1 ping statistics --- 00:11:14.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.900 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:11:14.900 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:14.900 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:11:14.900 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:14.900 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:14.900 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:14.900 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:14.900 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:14.900 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:14.900 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:14.900 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:14.900 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:14.900 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:14.900 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:14.900 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2046911 00:11:14.900 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:14.900 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2046911 00:11:14.900 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 2046911 ']' 00:11:14.900 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.900 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:14.900 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.900 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:14.900 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:14.900 [2024-07-26 11:19:10.326837] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:11:14.900 [2024-07-26 11:19:10.326932] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:14.900 EAL: No free 2048 kB hugepages reported on node 1 00:11:14.900 [2024-07-26 11:19:10.404443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:14.900 [2024-07-26 11:19:10.532162] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:14.900 [2024-07-26 11:19:10.532226] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:14.900 [2024-07-26 11:19:10.532243] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:14.900 [2024-07-26 11:19:10.532256] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:14.900 [2024-07-26 11:19:10.532268] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
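00:11:14.900 [annotation] The setup traced above is the standard phy-mode topology for these tests: one port of the dual-port NIC (cvl_0_0) is moved into a private network namespace and carries the target at 10.0.0.2, while the peer port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A condensed sketch of the same sequence, taken from the nvmf/common.sh trace above (the nvmf_tgt path is shortened here for readability):
00:11:14.900 [annotation]   ip netns add cvl_0_0_ns_spdk                                        # namespace for the target side
00:11:14.900 [annotation]   ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
00:11:14.900 [annotation]   ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
00:11:14.900 [annotation]   ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
00:11:14.900 [annotation]   ip link set cvl_0_1 up
00:11:14.900 [annotation]   ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:14.900 [annotation]   ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:14.900 [annotation]   iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
00:11:14.900 [annotation]   ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:11:14.900 [annotation] The two ping checks above confirm the namespaces can reach each other before nvmf_tgt starts accepting connections.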
00:11:14.900 [2024-07-26 11:19:10.535452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:14.900 [2024-07-26 11:19:10.535509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:14.900 [2024-07-26 11:19:10.535484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:14.900 [2024-07-26 11:19:10.535512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:15.159 [2024-07-26 11:19:10.702243] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:15.159 Malloc0 00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:15.159 [2024-07-26 11:19:10.756983] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:11:15.159 test case1: single bdev can't be used in multiple subsystems
00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:11:15.159 [2024-07-26 11:19:10.780802] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:11:15.159 [2024-07-26 11:19:10.780836] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:11:15.159 [2024-07-26 11:19:10.780853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:15.159 request:
00:11:15.159 {
00:11:15.159 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:11:15.159 "namespace": {
00:11:15.159 "bdev_name": "Malloc0",
00:11:15.159 "no_auto_visible": false
00:11:15.159 },
00:11:15.159 "method": "nvmf_subsystem_add_ns",
00:11:15.159 "req_id": 1
00:11:15.159 }
00:11:15.159 Got JSON-RPC error response
00:11:15.159 response:
00:11:15.159 {
00:11:15.159 "code": -32602,
00:11:15.159 "message": "Invalid parameters"
00:11:15.159 }
00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
00:11:15.159 Adding namespace failed - expected result.
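00:11:15.159 [annotation] This failure is the point of test case1: when Malloc0 was added to cnode1 earlier, the NVMe-oF target claimed the bdev exclusive_write, so the second nvmf_subsystem_add_ns against cnode2 is refused at bdev_open() and surfaces as the JSON-RPC error above. A condensed view of the RPC sequence the script drives (rpc_cmd is the test framework's wrapper around the SPDK JSON-RPC client):
00:11:15.159 [annotation]   rpc_cmd bdev_malloc_create 64 512 -b Malloc0                        # backing bdev
00:11:15.159 [annotation]   rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:11:15.159 [annotation]   rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # first add claims the bdev
00:11:15.159 [annotation]   rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:11:15.159 [annotation]   rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0    # fails: Malloc0 already claimed
00:11:15.159 [annotation] The script records the non-zero result in nmic_status and passes only because the failure was expected.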
00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:15.159 test case2: host connect to nvmf target in multiple paths 00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:15.159 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.160 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:15.160 [2024-07-26 11:19:10.788917] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:15.160 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.160 11:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:16.092 11:19:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:16.657 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:16.657 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:11:16.657 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:16.657 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:16.657 11:19:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:11:18.552 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:18.553 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:18.553 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:18.553 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:18.553 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:18.553 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:11:18.553 11:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:18.553 [global] 00:11:18.553 thread=1 00:11:18.553 invalidate=1 00:11:18.553 rw=write 00:11:18.553 time_based=1 00:11:18.553 runtime=1 00:11:18.553 ioengine=libaio 00:11:18.553 direct=1 00:11:18.553 bs=4096 00:11:18.553 iodepth=1 00:11:18.553 norandommap=0 00:11:18.553 numjobs=1 00:11:18.553 00:11:18.553 verify_dump=1 00:11:18.553 verify_backlog=512 00:11:18.553 verify_state_save=0 00:11:18.553 do_verify=1 00:11:18.553 verify=crc32c-intel 00:11:18.553 [job0] 00:11:18.553 filename=/dev/nvme0n1 00:11:18.809 Could not set queue depth (nvme0n1) 00:11:18.809 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1
00:11:18.809 fio-3.35
00:11:18.809 Starting 1 thread
00:11:20.179 
00:11:20.179 job0: (groupid=0, jobs=1): err= 0: pid=2047472: Fri Jul 26 11:19:15 2024
00:11:20.179 read: IOPS=540, BW=2160KiB/s (2212kB/s)(2212KiB/1024msec)
00:11:20.179 slat (nsec): min=6493, max=48309, avg=13116.97, stdev=5315.63
00:11:20.179 clat (usec): min=273, max=41091, avg=1426.98, stdev=6606.93
00:11:20.179 lat (usec): min=280, max=41106, avg=1440.10, stdev=6608.66
00:11:20.179 clat percentiles (usec):
00:11:20.179 | 1.00th=[ 281], 5.00th=[ 293], 10.00th=[ 297], 20.00th=[ 302],
00:11:20.179 | 30.00th=[ 310], 40.00th=[ 318], 50.00th=[ 326], 60.00th=[ 330],
00:11:20.179 | 70.00th=[ 338], 80.00th=[ 347], 90.00th=[ 359], 95.00th=[ 367],
00:11:20.179 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:11:20.179 | 99.99th=[41157]
00:11:20.179 write: IOPS=1000, BW=4000KiB/s (4096kB/s)(4096KiB/1024msec); 0 zone resets
00:11:20.179 slat (nsec): min=8268, max=57303, avg=10142.86, stdev=2522.71
00:11:20.179 clat (usec): min=185, max=352, avg=206.73, stdev=17.61
00:11:20.179 lat (usec): min=195, max=410, avg=216.88, stdev=19.05
00:11:20.179 clat percentiles (usec):
00:11:20.179 | 1.00th=[ 188], 5.00th=[ 192], 10.00th=[ 194], 20.00th=[ 196],
00:11:20.179 | 30.00th=[ 200], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 206],
00:11:20.179 | 70.00th=[ 210], 80.00th=[ 215], 90.00th=[ 221], 95.00th=[ 229],
00:11:20.179 | 99.00th=[ 297], 99.50th=[ 302], 99.90th=[ 326], 99.95th=[ 355],
00:11:20.179 | 99.99th=[ 355]
00:11:20.180 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1
00:11:20.180 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1
00:11:20.180 lat (usec) : 250=62.78%, 500=36.27%
00:11:20.180 lat (msec) : 50=0.95%
00:11:20.180 cpu : usr=1.47%, sys=1.27%, ctx=1577, majf=0, minf=2
00:11:20.180 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:20.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:20.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:20.180 issued rwts: total=553,1024,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:20.180 latency : target=0, window=0, percentile=100.00%, depth=1
00:11:20.180 
00:11:20.180 Run status group 0 (all jobs):
00:11:20.180 READ: bw=2160KiB/s (2212kB/s), 2160KiB/s-2160KiB/s (2212kB/s-2212kB/s), io=2212KiB (2265kB), run=1024-1024msec
00:11:20.180 WRITE: bw=4000KiB/s (4096kB/s), 4000KiB/s-4000KiB/s (4096kB/s-4096kB/s), io=4096KiB (4194kB), run=1024-1024msec
00:11:20.180 
00:11:20.180 Disk stats (read/write):
00:11:20.180 nvme0n1: ios=599/1024, merge=0/0, ticks=662/207, in_queue=869, util=92.08%
00:11:20.180 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:20.180 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:11:20.180 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:11:20.180 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0
00:11:20.180 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:11:20.180 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:20.180 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:11:20.180 11:19:15 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:20.180 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:11:20.180 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:20.180 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:20.180 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:20.180 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:11:20.180 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:20.180 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:11:20.180 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:20.180 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:20.180 rmmod nvme_tcp 00:11:20.180 rmmod nvme_fabrics 00:11:20.180 rmmod nvme_keyring 00:11:20.180 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:20.180 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:11:20.180 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:11:20.180 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2046911 ']' 00:11:20.180 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2046911 00:11:20.180 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 2046911 ']' 00:11:20.180 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 2046911 00:11:20.180 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:11:20.180 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:20.180 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2046911 00:11:20.180 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:20.180 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:20.180 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2046911' 00:11:20.180 killing process with pid 2046911 00:11:20.180 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 2046911 00:11:20.180 11:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 2046911 00:11:20.746 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:20.746 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:20.746 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:20.746 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:20.746 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:20.746 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.746 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:20.746 11:19:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.646 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:22.646 00:11:22.646 real 0m10.728s 00:11:22.646 user 0m23.229s 00:11:22.646 sys 0m2.874s 00:11:22.646 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:22.646 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:22.646 ************************************ 00:11:22.646 END TEST nvmf_nmic 00:11:22.646 ************************************ 00:11:22.646 11:19:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:22.646 11:19:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:22.646 11:19:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:22.646 11:19:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:22.646 ************************************ 00:11:22.646 START TEST nvmf_fio_target 00:11:22.646 ************************************ 00:11:22.646 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:22.646 * Looking for test storage... 00:11:22.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:22.905 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:22.905 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:22.905 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:22.905 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:22.905 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:22.905 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:22.905 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:22.905 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:22.905 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:22.905 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:22.905 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:22.905 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:22.905 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:22.905 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:22.905 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:11:22.905 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:22.905 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:22.905 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:22.905 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:22.905 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:22.905 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:22.905 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:22.905 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.906 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.906 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.906 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:22.906 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.906 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:11:22.906 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:22.906 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:22.906 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:22.906 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:22.906 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:22.906 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:22.906 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:22.906 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:22.906 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:22.906 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:22.906 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:22.906 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:22.906 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:22.906 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:22.906 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:22.906 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:22.906 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:22.906 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.906 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:22.906 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.906 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:22.906 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:22.906 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:11:22.906 11:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:25.438 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:25.438 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:25.438 Found net devices under 0000:84:00.0: cvl_0_0 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.438 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:25.438 Found net devices under 0000:84:00.1: cvl_0_1 00:11:25.439 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.439 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:25.439 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:11:25.439 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:25.439 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:25.439 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:25.439 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:25.439 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:25.439 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:25.439 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:25.439 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:25.439 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:25.439 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:25.439 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:25.439 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:25.439 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:25.439 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:25.439 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:25.439 11:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:25.439 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:25.439 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:25.439 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:25.439 11:19:21 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:25.439 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:25.439 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:25.697 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:25.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:25.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:11:25.697 00:11:25.697 --- 10.0.0.2 ping statistics --- 00:11:25.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.697 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:11:25.697 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:25.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:25.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:11:25.697 00:11:25.697 --- 10.0.0.1 ping statistics --- 00:11:25.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.697 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:11:25.697 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:25.697 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:11:25.697 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:25.697 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:25.697 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:25.697 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:25.697 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:25.697 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:25.697 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:25.697 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:25.697 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:25.697 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:25.697 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.697 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2049684 00:11:25.697 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:25.697 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2049684 00:11:25.697 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 2049684 ']' 00:11:25.697 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.697 11:19:21 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:25.697 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.697 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:25.697 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.697 [2024-07-26 11:19:21.208941] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:11:25.697 [2024-07-26 11:19:21.209038] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:25.697 EAL: No free 2048 kB hugepages reported on node 1 00:11:25.697 [2024-07-26 11:19:21.289958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:25.955 [2024-07-26 11:19:21.412967] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:25.955 [2024-07-26 11:19:21.413029] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:25.955 [2024-07-26 11:19:21.413045] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:25.955 [2024-07-26 11:19:21.413059] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:25.955 [2024-07-26 11:19:21.413071] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
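The target bring-up that the trace above records reduces to a short command sequence. The sketch below is a condensed reconstruction, not a verbatim excerpt: paths are shortened relative to the SPDK checkout, and the interface and namespace names (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk) are specific to this host's ice-driven E810 ports.

    # Move the target-side port into a private namespace and address both ends
    # (as logged at nvmf/common.sh@248-261 above)
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # Permit NVMe/TCP traffic on port 4420 (nvmf/common.sh@264)
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Sanity-check reachability in both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # Start the target inside the namespace, then wait for /var/tmp/spdk.sock
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # Once the app is up, open the TCP transport with the flags this test uses
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

Running the target in its own network namespace is what lets a single host act as both target (10.0.0.2, inside cvl_0_0_ns_spdk) and initiator (10.0.0.1, in the root namespace) over real NIC ports, as the rest of this run demonstrates.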
00:11:25.955 [2024-07-26 11:19:21.413167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:25.955 [2024-07-26 11:19:21.413226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:25.955 [2024-07-26 11:19:21.413297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:25.955 [2024-07-26 11:19:21.413300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.955 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:25.955 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:11:25.955 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:25.955 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:25.955 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.955 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:25.956 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:26.521 [2024-07-26 11:19:21.904282] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:26.521 11:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:27.086 11:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:27.086 11:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:27.344 11:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:27.344 11:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:27.602 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:27.602 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:27.859 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:27.859 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:28.424 11:19:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:28.682 11:19:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:28.682 11:19:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:29.248 11:19:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:29.248 11:19:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:29.837 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:29.837 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:30.112 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:30.370 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:30.370 11:19:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:30.627 11:19:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:30.628 11:19:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:31.192 11:19:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:31.450 [2024-07-26 11:19:26.914415] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:31.450 11:19:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:32.035 11:19:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:32.607 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:33.173 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:33.173 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:11:33.174 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:33.174 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:11:33.174 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:11:33.174 11:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:11:35.072 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:35.072 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:35.072 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:35.072 11:19:30 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:11:35.072 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:35.072 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:11:35.072 11:19:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:35.330 [global] 00:11:35.330 thread=1 00:11:35.330 invalidate=1 00:11:35.330 rw=write 00:11:35.330 time_based=1 00:11:35.330 runtime=1 00:11:35.330 ioengine=libaio 00:11:35.330 direct=1 00:11:35.330 bs=4096 00:11:35.330 iodepth=1 00:11:35.330 norandommap=0 00:11:35.330 numjobs=1 00:11:35.330 00:11:35.330 verify_dump=1 00:11:35.330 verify_backlog=512 00:11:35.330 verify_state_save=0 00:11:35.330 do_verify=1 00:11:35.330 verify=crc32c-intel 00:11:35.330 [job0] 00:11:35.330 filename=/dev/nvme0n1 00:11:35.330 [job1] 00:11:35.330 filename=/dev/nvme0n2 00:11:35.330 [job2] 00:11:35.330 filename=/dev/nvme0n3 00:11:35.330 [job3] 00:11:35.330 filename=/dev/nvme0n4 00:11:35.330 Could not set queue depth (nvme0n1) 00:11:35.330 Could not set queue depth (nvme0n2) 00:11:35.330 Could not set queue depth (nvme0n3) 00:11:35.330 Could not set queue depth (nvme0n4) 00:11:35.330 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:35.330 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:35.330 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:35.330 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:35.330 fio-3.35 00:11:35.330 Starting 4 threads 00:11:36.703 00:11:36.703 job0: (groupid=0, jobs=1): err= 0: pid=2050911: Fri Jul 26 11:19:32 2024 00:11:36.703 read: IOPS=19, BW=79.8KiB/s (81.7kB/s)(80.0KiB/1003msec) 00:11:36.703 slat (nsec): min=8368, max=17232, avg=14114.20, stdev=1722.35 00:11:36.703 clat (usec): min=40953, max=41221, avg=40995.97, stdev=56.50 00:11:36.703 lat (usec): min=40966, max=41230, avg=41010.09, stdev=55.09 00:11:36.703 clat percentiles (usec): 00:11:36.703 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:36.703 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:36.703 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:36.703 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:36.703 | 99.99th=[41157] 00:11:36.703 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:11:36.703 slat (usec): min=8, max=40594, avg=97.43, stdev=1797.17 00:11:36.703 clat (usec): min=187, max=368, avg=254.39, stdev=34.51 00:11:36.703 lat (usec): min=197, max=40919, avg=351.81, stdev=1800.92 00:11:36.703 clat percentiles (usec): 00:11:36.703 | 1.00th=[ 198], 5.00th=[ 206], 10.00th=[ 215], 20.00th=[ 225], 00:11:36.703 | 30.00th=[ 233], 40.00th=[ 241], 50.00th=[ 249], 60.00th=[ 260], 00:11:36.703 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 302], 95.00th=[ 318], 00:11:36.703 | 99.00th=[ 355], 99.50th=[ 359], 99.90th=[ 367], 99.95th=[ 367], 00:11:36.703 | 99.99th=[ 367] 00:11:36.703 bw ( KiB/s): min= 4096, max= 4096, per=23.75%, avg=4096.00, stdev= 0.00, samples=1 00:11:36.703 iops : min= 1024, max= 1024, avg=1024.00, 
stdev= 0.00, samples=1 00:11:36.703 lat (usec) : 250=50.00%, 500=46.24% 00:11:36.703 lat (msec) : 50=3.76% 00:11:36.703 cpu : usr=0.30%, sys=1.00%, ctx=535, majf=0, minf=1 00:11:36.703 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:36.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.703 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.703 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:36.703 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:36.703 job1: (groupid=0, jobs=1): err= 0: pid=2050912: Fri Jul 26 11:19:32 2024 00:11:36.703 read: IOPS=1429, BW=5718KiB/s (5856kB/s)(5724KiB/1001msec) 00:11:36.703 slat (nsec): min=6717, max=33411, avg=8408.76, stdev=2465.75 00:11:36.704 clat (usec): min=299, max=1595, avg=408.33, stdev=63.38 00:11:36.704 lat (usec): min=308, max=1603, avg=416.74, stdev=63.98 00:11:36.704 clat percentiles (usec): 00:11:36.704 | 1.00th=[ 314], 5.00th=[ 338], 10.00th=[ 359], 20.00th=[ 379], 00:11:36.704 | 30.00th=[ 392], 40.00th=[ 396], 50.00th=[ 404], 60.00th=[ 408], 00:11:36.704 | 70.00th=[ 412], 80.00th=[ 420], 90.00th=[ 453], 95.00th=[ 498], 00:11:36.704 | 99.00th=[ 652], 99.50th=[ 676], 99.90th=[ 971], 99.95th=[ 1598], 00:11:36.704 | 99.99th=[ 1598] 00:11:36.704 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:11:36.704 slat (usec): min=8, max=134, avg=12.41, stdev= 4.93 00:11:36.704 clat (usec): min=178, max=900, avg=244.00, stdev=39.49 00:11:36.704 lat (usec): min=188, max=919, avg=256.42, stdev=40.55 00:11:36.704 clat percentiles (usec): 00:11:36.704 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 215], 00:11:36.704 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 231], 60.00th=[ 247], 00:11:36.704 | 70.00th=[ 260], 80.00th=[ 273], 90.00th=[ 297], 95.00th=[ 314], 00:11:36.704 | 99.00th=[ 343], 99.50th=[ 363], 99.90th=[ 392], 99.95th=[ 898], 00:11:36.704 | 99.99th=[ 898] 00:11:36.704 bw ( KiB/s): min= 8192, max= 8192, per=47.51%, avg=8192.00, stdev= 0.00, samples=1 00:11:36.704 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:36.704 lat (usec) : 250=32.29%, 500=65.35%, 750=2.16%, 1000=0.17% 00:11:36.704 lat (msec) : 2=0.03% 00:11:36.704 cpu : usr=1.90%, sys=4.60%, ctx=2969, majf=0, minf=1 00:11:36.704 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:36.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.704 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.704 issued rwts: total=1431,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:36.704 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:36.704 job2: (groupid=0, jobs=1): err= 0: pid=2050913: Fri Jul 26 11:19:32 2024 00:11:36.704 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:36.704 slat (nsec): min=6805, max=41420, avg=9302.09, stdev=3541.62 00:11:36.704 clat (usec): min=280, max=551, avg=335.70, stdev=36.20 00:11:36.704 lat (usec): min=287, max=558, avg=345.01, stdev=36.79 00:11:36.704 clat percentiles (usec): 00:11:36.704 | 1.00th=[ 289], 5.00th=[ 297], 10.00th=[ 306], 20.00th=[ 314], 00:11:36.704 | 30.00th=[ 318], 40.00th=[ 322], 50.00th=[ 330], 60.00th=[ 334], 00:11:36.704 | 70.00th=[ 338], 80.00th=[ 351], 90.00th=[ 375], 95.00th=[ 412], 00:11:36.704 | 99.00th=[ 490], 99.50th=[ 510], 99.90th=[ 537], 99.95th=[ 553], 00:11:36.704 | 99.99th=[ 553] 00:11:36.704 write: IOPS=1762, BW=7049KiB/s 
(7218kB/s)(7056KiB/1001msec); 0 zone resets 00:11:36.704 slat (nsec): min=9013, max=44261, avg=12728.31, stdev=3992.27 00:11:36.704 clat (usec): min=190, max=939, avg=247.23, stdev=52.93 00:11:36.704 lat (usec): min=203, max=951, avg=259.96, stdev=53.99 00:11:36.704 clat percentiles (usec): 00:11:36.704 | 1.00th=[ 202], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 217], 00:11:36.704 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 231], 60.00th=[ 237], 00:11:36.704 | 70.00th=[ 247], 80.00th=[ 265], 90.00th=[ 306], 95.00th=[ 347], 00:11:36.704 | 99.00th=[ 441], 99.50th=[ 474], 99.90th=[ 922], 99.95th=[ 938], 00:11:36.704 | 99.99th=[ 938] 00:11:36.704 bw ( KiB/s): min= 8192, max= 8192, per=47.51%, avg=8192.00, stdev= 0.00, samples=1 00:11:36.704 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:36.704 lat (usec) : 250=38.91%, 500=60.70%, 750=0.33%, 1000=0.06% 00:11:36.704 cpu : usr=2.30%, sys=5.40%, ctx=3301, majf=0, minf=1 00:11:36.704 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:36.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.704 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.704 issued rwts: total=1536,1764,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:36.704 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:36.704 job3: (groupid=0, jobs=1): err= 0: pid=2050914: Fri Jul 26 11:19:32 2024 00:11:36.704 read: IOPS=334, BW=1339KiB/s (1371kB/s)(1340KiB/1001msec) 00:11:36.704 slat (nsec): min=7696, max=30573, avg=11393.12, stdev=3523.04 00:11:36.704 clat (usec): min=318, max=42035, avg=2530.23, stdev=8957.99 00:11:36.704 lat (usec): min=328, max=42049, avg=2541.62, stdev=8958.04 00:11:36.704 clat percentiles (usec): 00:11:36.704 | 1.00th=[ 343], 5.00th=[ 359], 10.00th=[ 367], 20.00th=[ 379], 00:11:36.704 | 30.00th=[ 396], 40.00th=[ 416], 50.00th=[ 445], 60.00th=[ 465], 00:11:36.704 | 70.00th=[ 482], 80.00th=[ 515], 90.00th=[ 562], 95.00th=[40633], 00:11:36.704 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:36.704 | 99.99th=[42206] 00:11:36.704 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:11:36.704 slat (nsec): min=9323, max=34939, avg=12649.57, stdev=3496.95 00:11:36.704 clat (usec): min=199, max=502, avg=272.73, stdev=37.92 00:11:36.704 lat (usec): min=212, max=513, avg=285.38, stdev=38.19 00:11:36.704 clat percentiles (usec): 00:11:36.704 | 1.00th=[ 206], 5.00th=[ 223], 10.00th=[ 231], 20.00th=[ 243], 00:11:36.704 | 30.00th=[ 251], 40.00th=[ 260], 50.00th=[ 269], 60.00th=[ 277], 00:11:36.704 | 70.00th=[ 285], 80.00th=[ 297], 90.00th=[ 318], 95.00th=[ 343], 00:11:36.704 | 99.00th=[ 383], 99.50th=[ 412], 99.90th=[ 502], 99.95th=[ 502], 00:11:36.704 | 99.99th=[ 502] 00:11:36.704 bw ( KiB/s): min= 4096, max= 4096, per=23.75%, avg=4096.00, stdev= 0.00, samples=1 00:11:36.704 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:36.704 lat (usec) : 250=16.88%, 500=73.55%, 750=7.44% 00:11:36.704 lat (msec) : 10=0.12%, 50=2.01% 00:11:36.704 cpu : usr=0.50%, sys=1.20%, ctx=847, majf=0, minf=2 00:11:36.704 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:36.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.704 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.704 issued rwts: total=335,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:36.704 latency : target=0, window=0, percentile=100.00%, depth=1 
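Each fio pass in this run is driven by a generated job file that the wrapper prints verbatim just before the per-thread results. Replaying the first write pass by hand would look roughly like the sketch below; the job-file name nvmf-write.fio is illustrative, the connection parameters are copied from the nvme connect line above (the --hostnqn/--hostid flags are omitted here for brevity), and the job file is abridged to the options that shape the result.

    # Connect to the exported subsystem; its 4 namespaces appear as nvme0n1..nvme0n4
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    # One verified, 1-second, QD1, 4 KiB sequential-write job per namespace
    cat > nvmf-write.fio <<'EOF'
    [global]
    ioengine=libaio
    direct=1
    rw=write
    bs=4096
    iodepth=1
    time_based=1
    runtime=1
    do_verify=1
    verify=crc32c-intel
    verify_backlog=512
    [job0]
    filename=/dev/nvme0n1
    [job1]
    filename=/dev/nvme0n2
    [job2]
    filename=/dev/nvme0n3
    [job3]
    filename=/dev/nvme0n4
    EOF
    fio nvmf-write.fio

Note that the verify options mean each pass is a correctness check first and a throughput number second: crc32c verification makes data-path corruption fail the run outright rather than merely skewing the bandwidth figures summarized next.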
00:11:36.704 00:11:36.704 Run status group 0 (all jobs): 00:11:36.704 READ: bw=12.9MiB/s (13.6MB/s), 79.8KiB/s-6138KiB/s (81.7kB/s-6285kB/s), io=13.0MiB (13.6MB), run=1001-1003msec 00:11:36.704 WRITE: bw=16.8MiB/s (17.7MB/s), 2042KiB/s-7049KiB/s (2091kB/s-7218kB/s), io=16.9MiB (17.7MB), run=1001-1003msec 00:11:36.704 00:11:36.704 Disk stats (read/write): 00:11:36.704 nvme0n1: ios=39/512, merge=0/0, ticks=1478/124, in_queue=1602, util=87.17% 00:11:36.704 nvme0n2: ios=1091/1536, merge=0/0, ticks=1030/364, in_queue=1394, util=87.63% 00:11:36.704 nvme0n3: ios=1272/1536, merge=0/0, ticks=1301/367, in_queue=1668, util=91.49% 00:11:36.704 nvme0n4: ios=83/512, merge=0/0, ticks=763/136, in_queue=899, util=96.13% 00:11:36.704 11:19:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:36.704 [global] 00:11:36.704 thread=1 00:11:36.704 invalidate=1 00:11:36.704 rw=randwrite 00:11:36.704 time_based=1 00:11:36.704 runtime=1 00:11:36.704 ioengine=libaio 00:11:36.704 direct=1 00:11:36.704 bs=4096 00:11:36.704 iodepth=1 00:11:36.704 norandommap=0 00:11:36.704 numjobs=1 00:11:36.704 00:11:36.704 verify_dump=1 00:11:36.704 verify_backlog=512 00:11:36.704 verify_state_save=0 00:11:36.704 do_verify=1 00:11:36.704 verify=crc32c-intel 00:11:36.704 [job0] 00:11:36.704 filename=/dev/nvme0n1 00:11:36.704 [job1] 00:11:36.704 filename=/dev/nvme0n2 00:11:36.704 [job2] 00:11:36.704 filename=/dev/nvme0n3 00:11:36.704 [job3] 00:11:36.704 filename=/dev/nvme0n4 00:11:36.704 Could not set queue depth (nvme0n1) 00:11:36.704 Could not set queue depth (nvme0n2) 00:11:36.704 Could not set queue depth (nvme0n3) 00:11:36.704 Could not set queue depth (nvme0n4) 00:11:36.962 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:36.962 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:36.962 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:36.962 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:36.962 fio-3.35 00:11:36.962 Starting 4 threads 00:11:38.338 00:11:38.338 job0: (groupid=0, jobs=1): err= 0: pid=2051262: Fri Jul 26 11:19:33 2024 00:11:38.338 read: IOPS=1247, BW=4988KiB/s (5108kB/s)(5168KiB/1036msec) 00:11:38.338 slat (nsec): min=5695, max=40428, avg=10158.79, stdev=4145.67 00:11:38.338 clat (usec): min=289, max=41010, avg=496.15, stdev=2252.39 00:11:38.338 lat (usec): min=297, max=41027, avg=506.31, stdev=2252.76 00:11:38.338 clat percentiles (usec): 00:11:38.338 | 1.00th=[ 310], 5.00th=[ 322], 10.00th=[ 326], 20.00th=[ 338], 00:11:38.338 | 30.00th=[ 347], 40.00th=[ 355], 50.00th=[ 363], 60.00th=[ 367], 00:11:38.338 | 70.00th=[ 379], 80.00th=[ 396], 90.00th=[ 445], 95.00th=[ 461], 00:11:38.338 | 99.00th=[ 515], 99.50th=[ 652], 99.90th=[41157], 99.95th=[41157], 00:11:38.338 | 99.99th=[41157] 00:11:38.338 write: IOPS=1482, BW=5931KiB/s (6073kB/s)(6144KiB/1036msec); 0 zone resets 00:11:38.338 slat (nsec): min=6370, max=29875, avg=10380.21, stdev=2676.15 00:11:38.338 clat (usec): min=173, max=437, avg=231.86, stdev=46.05 00:11:38.338 lat (usec): min=180, max=450, avg=242.24, stdev=47.12 00:11:38.338 clat percentiles (usec): 00:11:38.338 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 196], 00:11:38.338 | 30.00th=[ 202], 40.00th=[ 208], 
50.00th=[ 215], 60.00th=[ 225], 00:11:38.338 | 70.00th=[ 241], 80.00th=[ 269], 90.00th=[ 306], 95.00th=[ 334], 00:11:38.338 | 99.00th=[ 363], 99.50th=[ 379], 99.90th=[ 433], 99.95th=[ 437], 00:11:38.338 | 99.99th=[ 437] 00:11:38.339 bw ( KiB/s): min= 4096, max= 8192, per=38.93%, avg=6144.00, stdev=2896.31, samples=2 00:11:38.339 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:11:38.339 lat (usec) : 250=40.52%, 500=58.95%, 750=0.35%, 1000=0.04% 00:11:38.339 lat (msec) : 50=0.14% 00:11:38.339 cpu : usr=2.32%, sys=2.80%, ctx=2828, majf=0, minf=2 00:11:38.339 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:38.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.339 issued rwts: total=1292,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:38.339 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:38.339 job1: (groupid=0, jobs=1): err= 0: pid=2051263: Fri Jul 26 11:19:33 2024 00:11:38.339 read: IOPS=46, BW=185KiB/s (190kB/s)(192KiB/1037msec) 00:11:38.339 slat (nsec): min=6294, max=35042, avg=12890.71, stdev=6681.67 00:11:38.339 clat (usec): min=347, max=41971, avg=18848.16, stdev=20295.89 00:11:38.339 lat (usec): min=354, max=41993, avg=18861.05, stdev=20298.94 00:11:38.339 clat percentiles (usec): 00:11:38.339 | 1.00th=[ 347], 5.00th=[ 359], 10.00th=[ 371], 20.00th=[ 412], 00:11:38.339 | 30.00th=[ 424], 40.00th=[ 453], 50.00th=[ 537], 60.00th=[41157], 00:11:38.339 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:11:38.339 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:38.339 | 99.99th=[42206] 00:11:38.339 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:11:38.339 slat (nsec): min=7209, max=50204, avg=11668.94, stdev=7209.34 00:11:38.339 clat (usec): min=192, max=401, avg=241.74, stdev=35.20 00:11:38.339 lat (usec): min=199, max=449, avg=253.41, stdev=35.19 00:11:38.339 clat percentiles (usec): 00:11:38.339 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 212], 00:11:38.339 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 233], 60.00th=[ 241], 00:11:38.339 | 70.00th=[ 251], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 310], 00:11:38.339 | 99.00th=[ 347], 99.50th=[ 367], 99.90th=[ 400], 99.95th=[ 400], 00:11:38.339 | 99.99th=[ 400] 00:11:38.339 bw ( KiB/s): min= 4096, max= 4096, per=25.95%, avg=4096.00, stdev= 0.00, samples=1 00:11:38.339 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:38.339 lat (usec) : 250=63.04%, 500=32.50%, 750=0.54% 00:11:38.339 lat (msec) : 50=3.93% 00:11:38.339 cpu : usr=0.10%, sys=0.77%, ctx=561, majf=0, minf=1 00:11:38.339 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:38.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.339 issued rwts: total=48,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:38.339 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:38.339 job2: (groupid=0, jobs=1): err= 0: pid=2051266: Fri Jul 26 11:19:33 2024 00:11:38.339 read: IOPS=27, BW=110KiB/s (113kB/s)(112KiB/1019msec) 00:11:38.339 slat (nsec): min=8350, max=25693, avg=15728.29, stdev=3182.16 00:11:38.339 clat (usec): min=345, max=41977, avg=30875.62, stdev=17931.86 00:11:38.339 lat (usec): min=361, max=41993, avg=30891.34, stdev=17933.50 00:11:38.339 
clat percentiles (usec): 00:11:38.339 | 1.00th=[ 347], 5.00th=[ 363], 10.00th=[ 371], 20.00th=[ 392], 00:11:38.339 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:38.339 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:11:38.339 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:38.339 | 99.99th=[42206] 00:11:38.339 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:11:38.339 slat (nsec): min=9223, max=96633, avg=14035.31, stdev=6608.99 00:11:38.339 clat (usec): min=183, max=475, avg=282.43, stdev=48.16 00:11:38.339 lat (usec): min=220, max=519, avg=296.47, stdev=50.18 00:11:38.339 clat percentiles (usec): 00:11:38.339 | 1.00th=[ 217], 5.00th=[ 225], 10.00th=[ 233], 20.00th=[ 241], 00:11:38.339 | 30.00th=[ 245], 40.00th=[ 255], 50.00th=[ 277], 60.00th=[ 293], 00:11:38.339 | 70.00th=[ 306], 80.00th=[ 318], 90.00th=[ 347], 95.00th=[ 379], 00:11:38.339 | 99.00th=[ 424], 99.50th=[ 449], 99.90th=[ 478], 99.95th=[ 478], 00:11:38.339 | 99.99th=[ 478] 00:11:38.339 bw ( KiB/s): min= 4096, max= 4096, per=25.95%, avg=4096.00, stdev= 0.00, samples=1 00:11:38.339 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:38.339 lat (usec) : 250=32.41%, 500=63.70% 00:11:38.339 lat (msec) : 50=3.89% 00:11:38.339 cpu : usr=0.49%, sys=0.79%, ctx=541, majf=0, minf=1 00:11:38.339 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:38.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.339 issued rwts: total=28,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:38.339 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:38.339 job3: (groupid=0, jobs=1): err= 0: pid=2051267: Fri Jul 26 11:19:33 2024 00:11:38.339 read: IOPS=992, BW=3969KiB/s (4064kB/s)(4120KiB/1038msec) 00:11:38.339 slat (nsec): min=7247, max=23462, avg=9626.64, stdev=1809.36 00:11:38.339 clat (usec): min=278, max=43908, avg=569.27, stdev=3132.27 00:11:38.339 lat (usec): min=287, max=43931, avg=578.90, stdev=3132.88 00:11:38.339 clat percentiles (usec): 00:11:38.339 | 1.00th=[ 293], 5.00th=[ 302], 10.00th=[ 306], 20.00th=[ 310], 00:11:38.339 | 30.00th=[ 318], 40.00th=[ 322], 50.00th=[ 326], 60.00th=[ 330], 00:11:38.339 | 70.00th=[ 338], 80.00th=[ 347], 90.00th=[ 355], 95.00th=[ 375], 00:11:38.339 | 99.00th=[ 486], 99.50th=[40633], 99.90th=[41157], 99.95th=[43779], 00:11:38.339 | 99.99th=[43779] 00:11:38.339 write: IOPS=1479, BW=5919KiB/s (6061kB/s)(6144KiB/1038msec); 0 zone resets 00:11:38.339 slat (usec): min=9, max=40473, avg=38.47, stdev=1032.39 00:11:38.339 clat (usec): min=196, max=851, avg=243.81, stdev=39.78 00:11:38.339 lat (usec): min=206, max=40710, avg=282.28, stdev=1033.01 00:11:38.339 clat percentiles (usec): 00:11:38.339 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 221], 00:11:38.339 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 237], 60.00th=[ 241], 00:11:38.339 | 70.00th=[ 249], 80.00th=[ 260], 90.00th=[ 277], 95.00th=[ 297], 00:11:38.339 | 99.00th=[ 367], 99.50th=[ 388], 99.90th=[ 791], 99.95th=[ 857], 00:11:38.339 | 99.99th=[ 857] 00:11:38.339 bw ( KiB/s): min= 4096, max= 8192, per=38.93%, avg=6144.00, stdev=2896.31, samples=2 00:11:38.339 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:11:38.339 lat (usec) : 250=42.75%, 500=56.70%, 750=0.23%, 1000=0.08% 00:11:38.339 lat (msec) : 50=0.23% 00:11:38.339 cpu : usr=2.41%, sys=2.99%, 
ctx=2569, majf=0, minf=1 00:11:38.339 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:38.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.339 issued rwts: total=1030,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:38.339 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:38.339 00:11:38.339 Run status group 0 (all jobs): 00:11:38.339 READ: bw=9241KiB/s (9463kB/s), 110KiB/s-4988KiB/s (113kB/s-5108kB/s), io=9592KiB (9822kB), run=1019-1038msec 00:11:38.339 WRITE: bw=15.4MiB/s (16.2MB/s), 1975KiB/s-5931KiB/s (2022kB/s-6073kB/s), io=16.0MiB (16.8MB), run=1019-1038msec 00:11:38.339 00:11:38.339 Disk stats (read/write): 00:11:38.339 nvme0n1: ios=1247/1536, merge=0/0, ticks=490/347, in_queue=837, util=85.87% 00:11:38.339 nvme0n2: ios=85/512, merge=0/0, ticks=1616/122, in_queue=1738, util=92.04% 00:11:38.339 nvme0n3: ios=80/512, merge=0/0, ticks=752/136, in_queue=888, util=93.96% 00:11:38.339 nvme0n4: ios=1072/1536, merge=0/0, ticks=1135/368, in_queue=1503, util=97.32% 00:11:38.339 11:19:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:38.339 [global] 00:11:38.339 thread=1 00:11:38.339 invalidate=1 00:11:38.339 rw=write 00:11:38.339 time_based=1 00:11:38.339 runtime=1 00:11:38.339 ioengine=libaio 00:11:38.339 direct=1 00:11:38.339 bs=4096 00:11:38.339 iodepth=128 00:11:38.339 norandommap=0 00:11:38.339 numjobs=1 00:11:38.339 00:11:38.339 verify_dump=1 00:11:38.339 verify_backlog=512 00:11:38.339 verify_state_save=0 00:11:38.339 do_verify=1 00:11:38.339 verify=crc32c-intel 00:11:38.339 [job0] 00:11:38.339 filename=/dev/nvme0n1 00:11:38.339 [job1] 00:11:38.339 filename=/dev/nvme0n2 00:11:38.339 [job2] 00:11:38.339 filename=/dev/nvme0n3 00:11:38.339 [job3] 00:11:38.339 filename=/dev/nvme0n4 00:11:38.339 Could not set queue depth (nvme0n1) 00:11:38.339 Could not set queue depth (nvme0n2) 00:11:38.339 Could not set queue depth (nvme0n3) 00:11:38.339 Could not set queue depth (nvme0n4) 00:11:38.339 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:38.339 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:38.339 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:38.339 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:38.339 fio-3.35 00:11:38.339 Starting 4 threads 00:11:39.715 00:11:39.715 job0: (groupid=0, jobs=1): err= 0: pid=2051491: Fri Jul 26 11:19:35 2024 00:11:39.715 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:11:39.715 slat (usec): min=2, max=18970, avg=99.89, stdev=615.65 00:11:39.715 clat (usec): min=5920, max=34233, avg=13175.27, stdev=4048.50 00:11:39.715 lat (usec): min=6075, max=34240, avg=13275.16, stdev=4070.82 00:11:39.715 clat percentiles (usec): 00:11:39.715 | 1.00th=[ 7767], 5.00th=[10159], 10.00th=[10814], 20.00th=[11731], 00:11:39.715 | 30.00th=[11863], 40.00th=[11994], 50.00th=[12125], 60.00th=[12256], 00:11:39.715 | 70.00th=[12518], 80.00th=[13698], 90.00th=[14615], 95.00th=[25560], 00:11:39.715 | 99.00th=[29492], 99.50th=[33424], 99.90th=[34341], 99.95th=[34341], 00:11:39.715 | 99.99th=[34341] 00:11:39.715 write: IOPS=5088, 
BW=19.9MiB/s (20.8MB/s)(20.0MiB/1004msec); 0 zone resets 00:11:39.715 slat (usec): min=4, max=16438, avg=98.71, stdev=608.57 00:11:39.715 clat (usec): min=2446, max=35915, avg=12836.89, stdev=4431.22 00:11:39.715 lat (usec): min=2458, max=38662, avg=12935.60, stdev=4452.83 00:11:39.715 clat percentiles (usec): 00:11:39.715 | 1.00th=[ 5932], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[10290], 00:11:39.715 | 30.00th=[11338], 40.00th=[11863], 50.00th=[12125], 60.00th=[12518], 00:11:39.715 | 70.00th=[12911], 80.00th=[13435], 90.00th=[14222], 95.00th=[23462], 00:11:39.715 | 99.00th=[33162], 99.50th=[35914], 99.90th=[35914], 99.95th=[35914], 00:11:39.715 | 99.99th=[35914] 00:11:39.715 bw ( KiB/s): min=17520, max=22336, per=33.44%, avg=19928.00, stdev=3405.43, samples=2 00:11:39.715 iops : min= 4380, max= 5584, avg=4982.00, stdev=851.36, samples=2 00:11:39.715 lat (msec) : 4=0.52%, 10=10.52%, 20=81.53%, 50=7.43% 00:11:39.715 cpu : usr=3.99%, sys=7.58%, ctx=457, majf=0, minf=9 00:11:39.715 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:39.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:39.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:39.715 issued rwts: total=4608,5109,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:39.715 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:39.715 job1: (groupid=0, jobs=1): err= 0: pid=2051492: Fri Jul 26 11:19:35 2024 00:11:39.715 read: IOPS=3541, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1012msec) 00:11:39.715 slat (usec): min=3, max=11695, avg=120.47, stdev=906.75 00:11:39.715 clat (usec): min=7709, max=43168, avg=15435.62, stdev=4055.12 00:11:39.715 lat (usec): min=7715, max=43174, avg=15556.09, stdev=4152.49 00:11:39.715 clat percentiles (usec): 00:11:39.715 | 1.00th=[10159], 5.00th=[11338], 10.00th=[11863], 20.00th=[12780], 00:11:39.715 | 30.00th=[13173], 40.00th=[14091], 50.00th=[15270], 60.00th=[15533], 00:11:39.715 | 70.00th=[15926], 80.00th=[16712], 90.00th=[18744], 95.00th=[23462], 00:11:39.715 | 99.00th=[35914], 99.50th=[38011], 99.90th=[43254], 99.95th=[43254], 00:11:39.715 | 99.99th=[43254] 00:11:39.715 write: IOPS=3705, BW=14.5MiB/s (15.2MB/s)(14.6MiB/1012msec); 0 zone resets 00:11:39.715 slat (usec): min=5, max=18598, avg=132.59, stdev=924.05 00:11:39.715 clat (usec): min=1756, max=50230, avg=19459.62, stdev=10941.37 00:11:39.715 lat (usec): min=1765, max=50239, avg=19592.20, stdev=11031.44 00:11:39.715 clat percentiles (usec): 00:11:39.715 | 1.00th=[ 2966], 5.00th=[ 7046], 10.00th=[10028], 20.00th=[10945], 00:11:39.715 | 30.00th=[11863], 40.00th=[13829], 50.00th=[15401], 60.00th=[19268], 00:11:39.715 | 70.00th=[20841], 80.00th=[29230], 90.00th=[38011], 95.00th=[43254], 00:11:39.715 | 99.00th=[46400], 99.50th=[47449], 99.90th=[50070], 99.95th=[50070], 00:11:39.715 | 99.99th=[50070] 00:11:39.715 bw ( KiB/s): min=13144, max=15840, per=24.31%, avg=14492.00, stdev=1906.36, samples=2 00:11:39.715 iops : min= 3286, max= 3960, avg=3623.00, stdev=476.59, samples=2 00:11:39.715 lat (msec) : 2=0.22%, 4=0.55%, 10=4.88%, 20=73.08%, 50=21.18% 00:11:39.715 lat (msec) : 100=0.10% 00:11:39.715 cpu : usr=3.36%, sys=4.85%, ctx=263, majf=0, minf=9 00:11:39.715 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:11:39.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:39.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:39.715 issued rwts: total=3584,3750,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:11:39.715 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:39.715 job2: (groupid=0, jobs=1): err= 0: pid=2051494: Fri Jul 26 11:19:35 2024 00:11:39.715 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:11:39.715 slat (usec): min=2, max=33527, avg=169.40, stdev=1321.47 00:11:39.715 clat (msec): min=6, max=112, avg=21.86, stdev=19.14 00:11:39.715 lat (msec): min=6, max=112, avg=22.02, stdev=19.28 00:11:39.715 clat percentiles (msec): 00:11:39.715 | 1.00th=[ 10], 5.00th=[ 12], 10.00th=[ 12], 20.00th=[ 14], 00:11:39.715 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 15], 60.00th=[ 16], 00:11:39.715 | 70.00th=[ 17], 80.00th=[ 18], 90.00th=[ 57], 95.00th=[ 77], 00:11:39.715 | 99.00th=[ 89], 99.50th=[ 102], 99.90th=[ 102], 99.95th=[ 110], 00:11:39.715 | 99.99th=[ 112] 00:11:39.715 write: IOPS=3595, BW=14.0MiB/s (14.7MB/s)(14.1MiB/1004msec); 0 zone resets 00:11:39.715 slat (usec): min=4, max=4230, avg=101.61, stdev=537.81 00:11:39.715 clat (usec): min=3157, max=39048, avg=13453.50, stdev=2954.70 00:11:39.715 lat (usec): min=6594, max=39056, avg=13555.11, stdev=2968.89 00:11:39.715 clat percentiles (usec): 00:11:39.715 | 1.00th=[ 8717], 5.00th=[ 9634], 10.00th=[10683], 20.00th=[12387], 00:11:39.715 | 30.00th=[13042], 40.00th=[13435], 50.00th=[13435], 60.00th=[13566], 00:11:39.715 | 70.00th=[13829], 80.00th=[14222], 90.00th=[14877], 95.00th=[15926], 00:11:39.715 | 99.00th=[17695], 99.50th=[39060], 99.90th=[39060], 99.95th=[39060], 00:11:39.715 | 99.99th=[39060] 00:11:39.715 bw ( KiB/s): min=12288, max=16384, per=24.05%, avg=14336.00, stdev=2896.31, samples=2 00:11:39.715 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:11:39.715 lat (msec) : 4=0.01%, 10=4.23%, 20=86.38%, 50=3.89%, 100=5.24% 00:11:39.715 lat (msec) : 250=0.25% 00:11:39.715 cpu : usr=3.39%, sys=5.38%, ctx=330, majf=0, minf=19 00:11:39.715 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:11:39.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:39.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:39.715 issued rwts: total=3584,3610,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:39.715 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:39.715 job3: (groupid=0, jobs=1): err= 0: pid=2051495: Fri Jul 26 11:19:35 2024 00:11:39.715 read: IOPS=2510, BW=9.80MiB/s (10.3MB/s)(10.2MiB/1043msec) 00:11:39.715 slat (usec): min=3, max=37481, avg=182.58, stdev=1453.56 00:11:39.715 clat (msec): min=8, max=104, avg=24.48, stdev=18.66 00:11:39.715 lat (msec): min=8, max=104, avg=24.66, stdev=18.80 00:11:39.715 clat percentiles (msec): 00:11:39.715 | 1.00th=[ 10], 5.00th=[ 10], 10.00th=[ 12], 20.00th=[ 13], 00:11:39.715 | 30.00th=[ 14], 40.00th=[ 16], 50.00th=[ 20], 60.00th=[ 21], 00:11:39.715 | 70.00th=[ 24], 80.00th=[ 26], 90.00th=[ 57], 95.00th=[ 64], 00:11:39.715 | 99.00th=[ 90], 99.50th=[ 91], 99.90th=[ 99], 99.95th=[ 99], 00:11:39.715 | 99.99th=[ 105] 00:11:39.715 write: IOPS=2945, BW=11.5MiB/s (12.1MB/s)(12.0MiB/1043msec); 0 zone resets 00:11:39.715 slat (usec): min=4, max=25049, avg=163.72, stdev=1240.98 00:11:39.715 clat (usec): min=6934, max=76928, avg=21911.47, stdev=13220.15 00:11:39.715 lat (usec): min=8633, max=76963, avg=22075.19, stdev=13332.91 00:11:39.715 clat percentiles (usec): 00:11:39.715 | 1.00th=[10552], 5.00th=[12125], 10.00th=[12780], 20.00th=[13173], 00:11:39.715 | 30.00th=[13435], 40.00th=[13960], 50.00th=[16909], 60.00th=[17433], 00:11:39.715 | 70.00th=[19792], 
80.00th=[30802], 90.00th=[43254], 95.00th=[55313], 00:11:39.715 | 99.00th=[62129], 99.50th=[68682], 99.90th=[68682], 99.95th=[71828], 00:11:39.715 | 99.99th=[77071] 00:11:39.715 bw ( KiB/s): min=11736, max=12288, per=20.15%, avg=12012.00, stdev=390.32, samples=2 00:11:39.715 iops : min= 2934, max= 3072, avg=3003.00, stdev=97.58, samples=2 00:11:39.715 lat (msec) : 10=3.32%, 20=60.81%, 50=26.26%, 100=9.60%, 250=0.02% 00:11:39.715 cpu : usr=2.50%, sys=3.65%, ctx=224, majf=0, minf=13 00:11:39.715 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:39.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:39.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:39.715 issued rwts: total=2618,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:39.715 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:39.715 00:11:39.715 Run status group 0 (all jobs): 00:11:39.715 READ: bw=53.9MiB/s (56.5MB/s), 9.80MiB/s-17.9MiB/s (10.3MB/s-18.8MB/s), io=56.2MiB (59.0MB), run=1004-1043msec 00:11:39.715 WRITE: bw=58.2MiB/s (61.0MB/s), 11.5MiB/s-19.9MiB/s (12.1MB/s-20.8MB/s), io=60.7MiB (63.7MB), run=1004-1043msec 00:11:39.715 00:11:39.715 Disk stats (read/write): 00:11:39.715 nvme0n1: ios=3867/4096, merge=0/0, ticks=17498/20139, in_queue=37637, util=99.70% 00:11:39.715 nvme0n2: ios=2687/3072, merge=0/0, ticks=41620/61797, in_queue=103417, util=99.49% 00:11:39.715 nvme0n3: ios=3292/3584, merge=0/0, ticks=20519/14802, in_queue=35321, util=97.05% 00:11:39.715 nvme0n4: ios=2066/2111, merge=0/0, ticks=20953/19512, in_queue=40465, util=99.79% 00:11:39.715 11:19:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:39.715 [global] 00:11:39.715 thread=1 00:11:39.715 invalidate=1 00:11:39.715 rw=randwrite 00:11:39.715 time_based=1 00:11:39.715 runtime=1 00:11:39.715 ioengine=libaio 00:11:39.715 direct=1 00:11:39.715 bs=4096 00:11:39.715 iodepth=128 00:11:39.715 norandommap=0 00:11:39.715 numjobs=1 00:11:39.715 00:11:39.715 verify_dump=1 00:11:39.715 verify_backlog=512 00:11:39.715 verify_state_save=0 00:11:39.715 do_verify=1 00:11:39.715 verify=crc32c-intel 00:11:39.715 [job0] 00:11:39.716 filename=/dev/nvme0n1 00:11:39.716 [job1] 00:11:39.716 filename=/dev/nvme0n2 00:11:39.716 [job2] 00:11:39.716 filename=/dev/nvme0n3 00:11:39.716 [job3] 00:11:39.716 filename=/dev/nvme0n4 00:11:39.716 Could not set queue depth (nvme0n1) 00:11:39.716 Could not set queue depth (nvme0n2) 00:11:39.716 Could not set queue depth (nvme0n3) 00:11:39.716 Could not set queue depth (nvme0n4) 00:11:39.973 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:39.973 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:39.973 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:39.974 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:39.974 fio-3.35 00:11:39.974 Starting 4 threads 00:11:41.348 00:11:41.348 job0: (groupid=0, jobs=1): err= 0: pid=2051725: Fri Jul 26 11:19:36 2024 00:11:41.348 read: IOPS=4936, BW=19.3MiB/s (20.2MB/s)(19.3MiB/1002msec) 00:11:41.348 slat (usec): min=4, max=4087, avg=92.24, stdev=478.90 00:11:41.348 clat (usec): min=528, max=17202, avg=12294.73, 
stdev=1543.83 00:11:41.348 lat (usec): min=3785, max=17217, avg=12386.97, stdev=1526.55 00:11:41.348 clat percentiles (usec): 00:11:41.348 | 1.00th=[ 7767], 5.00th=[ 9634], 10.00th=[10421], 20.00th=[11338], 00:11:41.348 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12518], 60.00th=[12911], 00:11:41.348 | 70.00th=[13173], 80.00th=[13435], 90.00th=[13698], 95.00th=[13960], 00:11:41.348 | 99.00th=[15401], 99.50th=[15664], 99.90th=[16581], 99.95th=[16909], 00:11:41.348 | 99.99th=[17171] 00:11:41.348 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:11:41.348 slat (usec): min=5, max=16847, avg=98.50, stdev=571.68 00:11:41.348 clat (usec): min=7922, max=28337, avg=12770.23, stdev=2607.60 00:11:41.348 lat (usec): min=8050, max=28896, avg=12868.73, stdev=2624.13 00:11:41.348 clat percentiles (usec): 00:11:41.348 | 1.00th=[ 8717], 5.00th=[10290], 10.00th=[11207], 20.00th=[11469], 00:11:41.348 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12387], 60.00th=[12649], 00:11:41.348 | 70.00th=[12911], 80.00th=[13042], 90.00th=[14484], 95.00th=[16909], 00:11:41.348 | 99.00th=[25560], 99.50th=[25822], 99.90th=[26084], 99.95th=[26084], 00:11:41.348 | 99.99th=[28443] 00:11:41.348 bw ( KiB/s): min=20480, max=20480, per=30.70%, avg=20480.00, stdev= 0.00, samples=2 00:11:41.348 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:11:41.348 lat (usec) : 750=0.01% 00:11:41.348 lat (msec) : 4=0.17%, 10=5.30%, 20=92.91%, 50=1.62% 00:11:41.348 cpu : usr=5.29%, sys=7.49%, ctx=404, majf=0, minf=1 00:11:41.348 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:41.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.348 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:41.348 issued rwts: total=4946,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.348 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:41.348 job1: (groupid=0, jobs=1): err= 0: pid=2051726: Fri Jul 26 11:19:36 2024 00:11:41.348 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:11:41.348 slat (usec): min=3, max=19567, avg=185.77, stdev=1143.90 00:11:41.348 clat (usec): min=10327, max=69417, avg=23752.64, stdev=14464.96 00:11:41.348 lat (usec): min=10975, max=69436, avg=23938.41, stdev=14548.46 00:11:41.348 clat percentiles (usec): 00:11:41.348 | 1.00th=[11207], 5.00th=[11994], 10.00th=[13566], 20.00th=[13960], 00:11:41.348 | 30.00th=[14353], 40.00th=[14615], 50.00th=[15139], 60.00th=[18482], 00:11:41.348 | 70.00th=[25560], 80.00th=[37487], 90.00th=[49021], 95.00th=[53740], 00:11:41.348 | 99.00th=[63701], 99.50th=[63701], 99.90th=[63701], 99.95th=[66323], 00:11:41.348 | 99.99th=[69731] 00:11:41.348 write: IOPS=3181, BW=12.4MiB/s (13.0MB/s)(12.5MiB/1006msec); 0 zone resets 00:11:41.348 slat (usec): min=4, max=15169, avg=127.41, stdev=771.65 00:11:41.348 clat (usec): min=618, max=60144, avg=16821.41, stdev=8075.67 00:11:41.348 lat (usec): min=6701, max=60150, avg=16948.82, stdev=8097.63 00:11:41.348 clat percentiles (usec): 00:11:41.348 | 1.00th=[ 7701], 5.00th=[10814], 10.00th=[11207], 20.00th=[12387], 00:11:41.348 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13829], 60.00th=[14222], 00:11:41.348 | 70.00th=[15008], 80.00th=[18220], 90.00th=[31327], 95.00th=[37487], 00:11:41.348 | 99.00th=[45876], 99.50th=[50070], 99.90th=[50070], 99.95th=[50070], 00:11:41.348 | 99.99th=[60031] 00:11:41.348 bw ( KiB/s): min= 8200, max=16384, per=18.43%, avg=12292.00, stdev=5786.96, samples=2 00:11:41.348 iops : min= 2050, 
max= 4096, avg=3073.00, stdev=1446.74, samples=2 00:11:41.348 lat (usec) : 750=0.02% 00:11:41.348 lat (msec) : 10=0.96%, 20=70.59%, 50=23.63%, 100=4.81% 00:11:41.348 cpu : usr=2.89%, sys=5.07%, ctx=271, majf=0, minf=1 00:11:41.348 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:41.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.348 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:41.348 issued rwts: total=3072,3201,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.348 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:41.348 job2: (groupid=0, jobs=1): err= 0: pid=2051731: Fri Jul 26 11:19:36 2024 00:11:41.348 read: IOPS=3626, BW=14.2MiB/s (14.9MB/s)(14.2MiB/1006msec) 00:11:41.348 slat (usec): min=2, max=19002, avg=137.63, stdev=1024.70 00:11:41.348 clat (usec): min=1587, max=36970, avg=17522.87, stdev=4803.98 00:11:41.348 lat (usec): min=5209, max=36979, avg=17660.50, stdev=4860.75 00:11:41.348 clat percentiles (usec): 00:11:41.348 | 1.00th=[ 5800], 5.00th=[11600], 10.00th=[13566], 20.00th=[14091], 00:11:41.348 | 30.00th=[14615], 40.00th=[15139], 50.00th=[16909], 60.00th=[17957], 00:11:41.348 | 70.00th=[19268], 80.00th=[20579], 90.00th=[23200], 95.00th=[25822], 00:11:41.348 | 99.00th=[31327], 99.50th=[36963], 99.90th=[36963], 99.95th=[36963], 00:11:41.348 | 99.99th=[36963] 00:11:41.348 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:11:41.348 slat (usec): min=4, max=12287, avg=112.16, stdev=802.88 00:11:41.348 clat (usec): min=795, max=58661, avg=15496.21, stdev=5569.30 00:11:41.348 lat (usec): min=807, max=58668, avg=15608.36, stdev=5618.62 00:11:41.348 clat percentiles (usec): 00:11:41.348 | 1.00th=[ 4948], 5.00th=[ 8586], 10.00th=[ 9634], 20.00th=[11863], 00:11:41.348 | 30.00th=[13304], 40.00th=[14353], 50.00th=[14877], 60.00th=[15401], 00:11:41.348 | 70.00th=[15664], 80.00th=[17957], 90.00th=[22676], 95.00th=[26084], 00:11:41.348 | 99.00th=[32113], 99.50th=[42730], 99.90th=[53740], 99.95th=[53740], 00:11:41.348 | 99.99th=[58459] 00:11:41.348 bw ( KiB/s): min=15080, max=17176, per=24.18%, avg=16128.00, stdev=1482.10, samples=2 00:11:41.348 iops : min= 3770, max= 4294, avg=4032.00, stdev=370.52, samples=2 00:11:41.348 lat (usec) : 1000=0.04% 00:11:41.348 lat (msec) : 2=0.01%, 4=0.17%, 10=7.76%, 20=72.51%, 50=19.32% 00:11:41.348 lat (msec) : 100=0.19% 00:11:41.348 cpu : usr=3.78%, sys=4.68%, ctx=256, majf=0, minf=1 00:11:41.348 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:41.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.348 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:41.348 issued rwts: total=3648,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.348 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:41.348 job3: (groupid=0, jobs=1): err= 0: pid=2051732: Fri Jul 26 11:19:36 2024 00:11:41.348 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:11:41.348 slat (usec): min=4, max=4782, avg=112.98, stdev=535.07 00:11:41.348 clat (usec): min=7731, max=18878, avg=14770.52, stdev=1656.13 00:11:41.348 lat (usec): min=7740, max=18904, avg=14883.50, stdev=1591.85 00:11:41.348 clat percentiles (usec): 00:11:41.348 | 1.00th=[10814], 5.00th=[11994], 10.00th=[12911], 20.00th=[13698], 00:11:41.348 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14484], 60.00th=[15139], 00:11:41.348 | 70.00th=[15795], 80.00th=[16319], 90.00th=[16909], 
95.00th=[17171], 00:11:41.348 | 99.00th=[18482], 99.50th=[18482], 99.90th=[18744], 99.95th=[18744], 00:11:41.348 | 99.99th=[19006] 00:11:41.348 write: IOPS=4351, BW=17.0MiB/s (17.8MB/s)(17.0MiB/1002msec); 0 zone resets 00:11:41.348 slat (usec): min=4, max=13790, avg=105.13, stdev=567.10 00:11:41.348 clat (usec): min=501, max=119213, avg=15285.26, stdev=12193.82 00:11:41.348 lat (usec): min=914, max=119221, avg=15390.38, stdev=12224.25 00:11:41.348 clat percentiles (usec): 00:11:41.348 | 1.00th=[ 1696], 5.00th=[ 3785], 10.00th=[ 10552], 20.00th=[ 13042], 00:11:41.348 | 30.00th=[ 13304], 40.00th=[ 13698], 50.00th=[ 13829], 60.00th=[ 14484], 00:11:41.348 | 70.00th=[ 15270], 80.00th=[ 15664], 90.00th=[ 16319], 95.00th=[ 16909], 00:11:41.348 | 99.00th=[ 96994], 99.50th=[107480], 99.90th=[119014], 99.95th=[119014], 00:11:41.348 | 99.99th=[119014] 00:11:41.348 bw ( KiB/s): min=16384, max=16384, per=24.56%, avg=16384.00, stdev= 0.00, samples=1 00:11:41.348 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:11:41.348 lat (usec) : 750=0.01%, 1000=0.13% 00:11:41.348 lat (msec) : 2=1.58%, 4=1.11%, 10=1.96%, 20=93.40%, 50=0.67% 00:11:41.348 lat (msec) : 100=0.67%, 250=0.45% 00:11:41.348 cpu : usr=3.30%, sys=7.49%, ctx=409, majf=0, minf=1 00:11:41.348 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:41.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.348 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:41.348 issued rwts: total=4096,4360,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.348 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:41.348 00:11:41.348 Run status group 0 (all jobs): 00:11:41.348 READ: bw=61.2MiB/s (64.2MB/s), 11.9MiB/s-19.3MiB/s (12.5MB/s-20.2MB/s), io=61.6MiB (64.6MB), run=1002-1006msec 00:11:41.348 WRITE: bw=65.1MiB/s (68.3MB/s), 12.4MiB/s-20.0MiB/s (13.0MB/s-20.9MB/s), io=65.5MiB (68.7MB), run=1002-1006msec 00:11:41.348 00:11:41.348 Disk stats (read/write): 00:11:41.349 nvme0n1: ios=4032/4096, merge=0/0, ticks=16894/16382, in_queue=33276, util=99.90% 00:11:41.349 nvme0n2: ios=2760/3072, merge=0/0, ticks=15035/12034, in_queue=27069, util=100.00% 00:11:41.349 nvme0n3: ios=3114/3247, merge=0/0, ticks=42235/40299, in_queue=82534, util=99.36% 00:11:41.349 nvme0n4: ios=3199/3584, merge=0/0, ticks=11881/21869, in_queue=33750, util=89.27% 00:11:41.349 11:19:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:41.349 11:19:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2051867 00:11:41.349 11:19:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:41.349 11:19:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:41.349 [global] 00:11:41.349 thread=1 00:11:41.349 invalidate=1 00:11:41.349 rw=read 00:11:41.349 time_based=1 00:11:41.349 runtime=10 00:11:41.349 ioengine=libaio 00:11:41.349 direct=1 00:11:41.349 bs=4096 00:11:41.349 iodepth=1 00:11:41.349 norandommap=1 00:11:41.349 numjobs=1 00:11:41.349 00:11:41.349 [job0] 00:11:41.349 filename=/dev/nvme0n1 00:11:41.349 [job1] 00:11:41.349 filename=/dev/nvme0n2 00:11:41.349 [job2] 00:11:41.349 filename=/dev/nvme0n3 00:11:41.349 [job3] 00:11:41.349 filename=/dev/nvme0n4 00:11:41.349 Could not set queue depth (nvme0n1) 00:11:41.349 Could not set queue depth (nvme0n2) 00:11:41.349 Could not set queue depth 
(nvme0n3) 00:11:41.349 Could not set queue depth (nvme0n4) 00:11:41.349 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:41.349 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:41.349 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:41.349 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:41.349 fio-3.35 00:11:41.349 Starting 4 threads 00:11:44.628 11:19:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:44.628 11:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:44.628 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=2891776, buflen=4096 00:11:44.628 fio: pid=2052080, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:44.886 11:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:44.886 11:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:44.886 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=1081344, buflen=4096 00:11:44.886 fio: pid=2052079, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:45.144 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=18542592, buflen=4096 00:11:45.144 fio: pid=2052053, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:45.144 11:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:45.144 11:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:45.709 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=12525568, buflen=4096 00:11:45.709 fio: pid=2052070, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:11:45.709 11:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:45.709 11:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:45.709 00:11:45.709 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2052053: Fri Jul 26 11:19:41 2024 00:11:45.709 read: IOPS=1226, BW=4903KiB/s (5021kB/s)(17.7MiB/3693msec) 00:11:45.709 slat (usec): min=5, max=35108, avg=26.97, stdev=628.46 00:11:45.710 clat (usec): min=259, max=41516, avg=781.27, stdev=4073.48 00:11:45.710 lat (usec): min=267, max=41524, avg=808.24, stdev=4121.07 00:11:45.710 clat percentiles (usec): 00:11:45.710 | 1.00th=[ 285], 5.00th=[ 302], 10.00th=[ 310], 20.00th=[ 322], 00:11:45.710 | 30.00th=[ 330], 40.00th=[ 338], 50.00th=[ 351], 60.00th=[ 367], 00:11:45.710 | 70.00th=[ 388], 80.00th=[ 416], 90.00th=[ 461], 95.00th=[ 494], 00:11:45.710 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 
99.95th=[41157], 00:11:45.710 | 99.99th=[41681] 00:11:45.710 bw ( KiB/s): min= 96, max=11440, per=55.59%, avg=4704.14, stdev=4960.04, samples=7 00:11:45.710 iops : min= 24, max= 2860, avg=1176.00, stdev=1239.98, samples=7 00:11:45.710 lat (usec) : 500=95.61%, 750=3.11%, 1000=0.18% 00:11:45.710 lat (msec) : 2=0.07%, 50=1.02% 00:11:45.710 cpu : usr=0.79%, sys=1.68%, ctx=4533, majf=0, minf=1 00:11:45.710 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:45.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.710 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.710 issued rwts: total=4528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:45.710 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:45.710 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=2052070: Fri Jul 26 11:19:41 2024 00:11:45.710 read: IOPS=756, BW=3025KiB/s (3097kB/s)(11.9MiB/4044msec) 00:11:45.710 slat (usec): min=6, max=40822, avg=32.36, stdev=811.79 00:11:45.710 clat (usec): min=265, max=41580, avg=1287.54, stdev=5989.87 00:11:45.710 lat (usec): min=273, max=56737, avg=1317.69, stdev=6093.17 00:11:45.710 clat percentiles (usec): 00:11:45.710 | 1.00th=[ 293], 5.00th=[ 314], 10.00th=[ 326], 20.00th=[ 347], 00:11:45.710 | 30.00th=[ 359], 40.00th=[ 371], 50.00th=[ 379], 60.00th=[ 383], 00:11:45.710 | 70.00th=[ 400], 80.00th=[ 416], 90.00th=[ 465], 95.00th=[ 515], 00:11:45.710 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:11:45.710 | 99.99th=[41681] 00:11:45.710 bw ( KiB/s): min= 96, max=10048, per=40.78%, avg=3451.14, stdev=4133.67, samples=7 00:11:45.710 iops : min= 24, max= 2512, avg=862.71, stdev=1033.46, samples=7 00:11:45.710 lat (usec) : 500=93.85%, 750=3.69%, 1000=0.13% 00:11:45.710 lat (msec) : 2=0.07%, 50=2.22% 00:11:45.710 cpu : usr=0.57%, sys=1.16%, ctx=3062, majf=0, minf=1 00:11:45.710 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:45.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.710 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.710 issued rwts: total=3059,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:45.710 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:45.710 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2052079: Fri Jul 26 11:19:41 2024 00:11:45.710 read: IOPS=77, BW=309KiB/s (317kB/s)(1056KiB/3413msec) 00:11:45.710 slat (usec): min=5, max=806, avg=15.64, stdev=49.24 00:11:45.710 clat (usec): min=288, max=41328, avg=12824.25, stdev=18771.25 00:11:45.710 lat (usec): min=296, max=41956, avg=12839.88, stdev=18778.60 00:11:45.710 clat percentiles (usec): 00:11:45.710 | 1.00th=[ 293], 5.00th=[ 306], 10.00th=[ 314], 20.00th=[ 326], 00:11:45.710 | 30.00th=[ 334], 40.00th=[ 343], 50.00th=[ 363], 60.00th=[ 424], 00:11:45.710 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:45.710 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:45.710 | 99.99th=[41157] 00:11:45.710 bw ( KiB/s): min= 96, max= 1544, per=3.99%, avg=338.67, stdev=590.50, samples=6 00:11:45.710 iops : min= 24, max= 386, avg=84.67, stdev=147.62, samples=6 00:11:45.710 lat (usec) : 500=65.66%, 750=3.40% 00:11:45.710 lat (msec) : 50=30.57% 00:11:45.710 cpu : usr=0.18%, sys=0.00%, ctx=267, majf=0, minf=1 00:11:45.710 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:11:45.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.710 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.710 issued rwts: total=265,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:45.710 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:45.710 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2052080: Fri Jul 26 11:19:41 2024 00:11:45.710 read: IOPS=228, BW=913KiB/s (935kB/s)(2824KiB/3093msec) 00:11:45.710 slat (nsec): min=6244, max=50070, avg=9727.27, stdev=4315.02 00:11:45.710 clat (usec): min=279, max=41958, avg=4337.26, stdev=12070.40 00:11:45.710 lat (usec): min=287, max=41974, avg=4346.98, stdev=12072.91 00:11:45.710 clat percentiles (usec): 00:11:45.710 | 1.00th=[ 285], 5.00th=[ 297], 10.00th=[ 302], 20.00th=[ 318], 00:11:45.710 | 30.00th=[ 343], 40.00th=[ 375], 50.00th=[ 388], 60.00th=[ 396], 00:11:45.710 | 70.00th=[ 400], 80.00th=[ 412], 90.00th=[ 570], 95.00th=[41157], 00:11:45.710 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:11:45.710 | 99.99th=[42206] 00:11:45.710 bw ( KiB/s): min= 96, max= 2192, per=9.93%, avg=840.00, stdev=1043.86, samples=6 00:11:45.710 iops : min= 24, max= 548, avg=210.00, stdev=260.96, samples=6 00:11:45.710 lat (usec) : 500=89.53%, 750=0.57% 00:11:45.710 lat (msec) : 50=9.76% 00:11:45.710 cpu : usr=0.23%, sys=0.10%, ctx=708, majf=0, minf=1 00:11:45.710 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:45.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.710 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.710 issued rwts: total=707,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:45.710 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:45.710 00:11:45.710 Run status group 0 (all jobs): 00:11:45.710 READ: bw=8462KiB/s (8665kB/s), 309KiB/s-4903KiB/s (317kB/s-5021kB/s), io=33.4MiB (35.0MB), run=3093-4044msec 00:11:45.710 00:11:45.710 Disk stats (read/write): 00:11:45.710 nvme0n1: ios=4337/0, merge=0/0, ticks=3694/0, in_queue=3694, util=98.05% 00:11:45.710 nvme0n2: ios=3053/0, merge=0/0, ticks=3708/0, in_queue=3708, util=95.17% 00:11:45.710 nvme0n3: ios=262/0, merge=0/0, ticks=3307/0, in_queue=3307, util=96.95% 00:11:45.710 nvme0n4: ios=707/0, merge=0/0, ticks=3069/0, in_queue=3069, util=96.64% 00:11:45.968 11:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:45.968 11:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:46.533 11:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:46.534 11:19:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:46.792 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:46.792 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:47.359 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:47.359 11:19:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:47.647 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:47.647 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2051867 00:11:47.647 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:47.647 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:47.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.905 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:47.905 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:47.905 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:47.905 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:47.905 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:47.905 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:47.905 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:47.905 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:47.905 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:47.905 nvmf hotplug test: fio failed as expected 00:11:47.905 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:48.470 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:48.470 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:48.470 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:48.470 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:48.470 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:48.470 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:48.470 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:11:48.470 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:48.470 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:11:48.470 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:48.470 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:48.470 rmmod nvme_tcp 00:11:48.470 rmmod nvme_fabrics 00:11:48.470 rmmod nvme_keyring 00:11:48.470 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:48.470 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:11:48.470 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:11:48.470 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2049684 ']' 00:11:48.470 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2049684 00:11:48.470 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 2049684 ']' 00:11:48.470 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 2049684 00:11:48.470 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:11:48.470 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:48.470 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2049684 00:11:48.470 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:48.470 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:48.470 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2049684' 00:11:48.470 killing process with pid 2049684 00:11:48.470 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 2049684 00:11:48.470 11:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 2049684 00:11:48.728 11:19:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:48.728 11:19:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:48.728 11:19:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:48.728 11:19:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:48.728 11:19:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:48.728 11:19:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.728 11:19:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:48.728 11:19:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.631 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:50.631 00:11:50.631 real 0m28.044s 00:11:50.631 user 1m41.769s 00:11:50.631 sys 0m7.227s 00:11:50.631 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:50.631 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.631 ************************************ 00:11:50.631 END TEST nvmf_fio_target 00:11:50.631 ************************************ 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:50.890 ************************************ 00:11:50.890 START TEST nvmf_bdevio 00:11:50.890 ************************************ 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:50.890 * Looking for test storage... 00:11:50.890 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 
-- # '[' 0 -eq 1 ']' 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:11:50.890 11:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:53.418 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:53.418 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:53.418 Found net devices under 0000:84:00.0: cvl_0_0 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:53.418 Found net devices under 0000:84:00.1: cvl_0_1 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:53.418 11:19:48 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:53.418 11:19:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:53.418 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:53.418 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:53.419 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:53.419 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:53.419 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:53.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:53.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:11:53.419 00:11:53.419 --- 10.0.0.2 ping statistics --- 00:11:53.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.419 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:11:53.419 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:53.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:53.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:11:53.419 00:11:53.419 --- 10.0.0.1 ping statistics --- 00:11:53.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.419 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:11:53.419 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:53.419 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:11:53.419 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:53.419 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:53.419 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:53.419 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:53.419 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:53.419 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:53.419 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:53.677 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:53.677 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:53.677 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:53.677 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:53.677 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2054862 00:11:53.677 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2054862 00:11:53.677 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 2054862 ']' 00:11:53.677 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:53.677 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.677 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:53.677 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.677 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:53.677 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:53.677 [2024-07-26 11:19:49.222343] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
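For reference, the network plumbing traced above reduces to a short sequence: the target-side port (cvl_0_0) is moved into a private namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2, while the initiator-side port (cvl_0_1) stays in the root namespace as 10.0.0.1, with an iptables rule admitting NVMe/TCP on port 4420. A minimal sketch of that setup, assuming the same cvl_0_* interface names and 10.0.0.0/24 addressing used in this run:

    # Sketch of the namespace setup traced above; interface names (cvl_0_0/cvl_0_1)
    # and the 10.0.0.0/24 addressing are the values from this particular run.
    ip netns add cvl_0_0_ns_spdk                                  # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                            # reachability check, both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

nvmf_tgt is then started inside the namespace via the NVMF_TARGET_NS_CMD wrapper (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x78), so the target listens on 10.0.0.2 while the initiator-side tools connect from 10.0.0.1.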
00:11:53.677 [2024-07-26 11:19:49.222542] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.677 EAL: No free 2048 kB hugepages reported on node 1 00:11:53.935 [2024-07-26 11:19:49.379821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:53.935 [2024-07-26 11:19:49.579571] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:53.935 [2024-07-26 11:19:49.579628] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:53.935 [2024-07-26 11:19:49.579645] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:53.935 [2024-07-26 11:19:49.579658] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:53.935 [2024-07-26 11:19:49.579670] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:53.935 [2024-07-26 11:19:49.579770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:53.935 [2024-07-26 11:19:49.580216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:53.935 [2024-07-26 11:19:49.580310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:53.935 [2024-07-26 11:19:49.583449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:54.193 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:54.193 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:11:54.193 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:54.193 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:54.193 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:54.193 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:54.193 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:54.193 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.193 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:54.193 [2024-07-26 11:19:49.775685] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:54.193 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.193 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:54.194 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.194 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:54.194 Malloc0 00:11:54.194 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.194 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:54.194 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.194 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:54.194 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.194 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:54.194 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.194 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:54.194 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.194 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:54.194 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.194 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:54.194 [2024-07-26 11:19:49.833635] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:54.194 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.194 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:54.194 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:54.194 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:11:54.194 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:11:54.194 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:54.194 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:54.194 { 00:11:54.194 "params": { 00:11:54.194 "name": "Nvme$subsystem", 00:11:54.194 "trtype": "$TEST_TRANSPORT", 00:11:54.194 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:54.194 "adrfam": "ipv4", 00:11:54.194 "trsvcid": "$NVMF_PORT", 00:11:54.194 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:54.194 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:54.194 "hdgst": ${hdgst:-false}, 00:11:54.194 "ddgst": ${ddgst:-false} 00:11:54.194 }, 00:11:54.194 "method": "bdev_nvme_attach_controller" 00:11:54.194 } 00:11:54.194 EOF 00:11:54.194 )") 00:11:54.194 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:11:54.194 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
00:11:54.194 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:11:54.194 11:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:54.194 "params": { 00:11:54.194 "name": "Nvme1", 00:11:54.194 "trtype": "tcp", 00:11:54.194 "traddr": "10.0.0.2", 00:11:54.194 "adrfam": "ipv4", 00:11:54.194 "trsvcid": "4420", 00:11:54.194 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:54.194 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:54.194 "hdgst": false, 00:11:54.194 "ddgst": false 00:11:54.194 }, 00:11:54.194 "method": "bdev_nvme_attach_controller" 00:11:54.194 }' 00:11:54.452 [2024-07-26 11:19:49.885946] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:11:54.452 [2024-07-26 11:19:49.886030] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2055013 ] 00:11:54.452 EAL: No free 2048 kB hugepages reported on node 1 00:11:54.452 [2024-07-26 11:19:49.955963] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:54.452 [2024-07-26 11:19:50.096609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:54.452 [2024-07-26 11:19:50.096662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:54.452 [2024-07-26 11:19:50.096665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.709 I/O targets: 00:11:54.710 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:54.710 00:11:54.710 00:11:54.710 CUnit - A unit testing framework for C - Version 2.1-3 00:11:54.710 http://cunit.sourceforge.net/ 00:11:54.710 00:11:54.710 00:11:54.710 Suite: bdevio tests on: Nvme1n1 00:11:54.967 Test: blockdev write read block ...passed 00:11:54.967 Test: blockdev write zeroes read block ...passed 00:11:54.967 Test: blockdev write zeroes read no split ...passed 00:11:54.967 Test: blockdev write zeroes read split ...passed 00:11:54.967 Test: blockdev write zeroes read split partial ...passed 00:11:54.967 Test: blockdev reset ...[2024-07-26 11:19:50.536755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:54.967 [2024-07-26 11:19:50.536880] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10bbbd0 (9): Bad file descriptor 00:11:54.967 [2024-07-26 11:19:50.606004] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:54.967 passed
00:11:55.225 Test: blockdev write read 8 blocks ...passed
00:11:55.225 Test: blockdev write read size > 128k ...passed
00:11:55.225 Test: blockdev write read invalid size ...passed
00:11:55.225 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:11:55.225 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:11:55.225 Test: blockdev write read max offset ...passed
00:11:55.225 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:11:55.225 Test: blockdev writev readv 8 blocks ...passed
00:11:55.225 Test: blockdev writev readv 30 x 1block ...passed
00:11:55.225 Test: blockdev writev readv block ...passed
00:11:55.225 Test: blockdev writev readv size > 128k ...passed
00:11:55.225 Test: blockdev writev readv size > 128k in two iovs ...passed
00:11:55.225 Test: blockdev comparev and writev ...[2024-07-26 11:19:50.820974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:11:55.225 [2024-07-26 11:19:50.821013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:11:55.225 [2024-07-26 11:19:50.821041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:11:55.225 [2024-07-26 11:19:50.821060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:11:55.225 [2024-07-26 11:19:50.821492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:11:55.225 [2024-07-26 11:19:50.821521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:11:55.225 [2024-07-26 11:19:50.821546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:11:55.225 [2024-07-26 11:19:50.821564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:11:55.225 [2024-07-26 11:19:50.821997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:11:55.225 [2024-07-26 11:19:50.822025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:11:55.225 [2024-07-26 11:19:50.822049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:11:55.225 [2024-07-26 11:19:50.822067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:11:55.225 [2024-07-26 11:19:50.822503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:11:55.225 [2024-07-26 11:19:50.822538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:11:55.225 [2024-07-26 11:19:50.822564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:11:55.225 [2024-07-26 11:19:50.822581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED -
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:11:55.225 passed
00:11:55.483 Test: blockdev nvme passthru rw ...passed
00:11:55.483 Test: blockdev nvme passthru vendor specific ...[2024-07-26 11:19:50.904810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:11:55.483 [2024-07-26 11:19:50.904842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:11:55.483 [2024-07-26 11:19:50.905108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:11:55.483 [2024-07-26 11:19:50.905134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:11:55.483 [2024-07-26 11:19:50.905355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:11:55.483 [2024-07-26 11:19:50.905382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:11:55.483 [2024-07-26 11:19:50.905600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:11:55.483 [2024-07-26 11:19:50.905628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:11:55.483 passed
00:11:55.483 Test: blockdev nvme admin passthru ...passed
00:11:55.483 Test: blockdev copy ...passed
00:11:55.483
00:11:55.483 Run Summary: Type Total Ran Passed Failed Inactive
00:11:55.483 suites 1 1 n/a 0 0
00:11:55.483 tests 23 23 23 0 0
00:11:55.483 asserts 152 152 152 0 n/a
00:11:55.483
00:11:55.483 Elapsed time = 1.260 seconds
00:11:55.740 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:55.740 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:55.740 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:11:55.740 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:55.740 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:11:55.740 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini
00:11:55.740 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup
00:11:55.740 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync
00:11:55.740 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:11:55.740 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e
00:11:55.740 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20}
00:11:55.740 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:11:55.740 rmmod nvme_tcp
00:11:55.740 rmmod nvme_fabrics
00:11:55.740 rmmod nvme_keyring
00:11:55.740 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:11:55.740 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e
00:11:55.740 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0
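With the suite green (23/23 tests, 152/152 asserts), the harness tears the target back down. Condensed into plain commands, the teardown traced above plus the killprocess records that follow amount to the sketch below; the rpc.py path and the $nvmfpid variable are illustrative stand-ins for the rpc_cmd wrapper and the concrete pid in this log:

scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # drop the subsystem first
modprobe -v -r nvme-tcp      # also pulls out nvme_fabrics/nvme_keyring, per the rmmod lines above
modprobe -v -r nvme-fabrics  # usually a no-op once the previous removal has run
kill "$nvmfpid"              # killprocess: signal the nvmf target...
wait "$nvmfpid"              # ...then reap it so its exit status is collected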
00:11:55.740 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2054862 ']' 00:11:55.740 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2054862 00:11:55.740 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 2054862 ']' 00:11:55.740 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 2054862 00:11:55.740 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:11:55.740 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:55.740 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2054862 00:11:55.740 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:11:55.740 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:11:55.740 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2054862' 00:11:55.740 killing process with pid 2054862 00:11:55.740 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 2054862 00:11:55.740 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 2054862 00:11:56.307 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:56.307 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:56.307 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:56.308 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:56.308 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:56.308 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.308 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.308 11:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.209 11:19:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:58.209 00:11:58.209 real 0m7.451s 00:11:58.209 user 0m11.741s 00:11:58.209 sys 0m2.726s 00:11:58.209 11:19:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:58.209 11:19:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:58.209 ************************************ 00:11:58.209 END TEST nvmf_bdevio 00:11:58.209 ************************************ 00:11:58.209 11:19:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:58.209 00:11:58.209 real 4m19.832s 00:11:58.209 user 11m10.670s 00:11:58.209 sys 1m18.914s 00:11:58.209 11:19:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:58.209 11:19:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:58.209 ************************************ 00:11:58.209 END TEST nvmf_target_core 00:11:58.209 ************************************ 00:11:58.209 11:19:53 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:58.209 11:19:53 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:58.209 11:19:53 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:58.209 11:19:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:58.468 ************************************ 00:11:58.468 START TEST nvmf_target_extra 00:11:58.468 ************************************ 00:11:58.468 11:19:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:58.468 * Looking for test storage... 00:11:58.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:58.468 11:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:58.468 11:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:58.468 11:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:58.468 11:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:58.468 11:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:58.468 11:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:58.468 11:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:58.468 11:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:58.468 11:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:58.468 11:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:58.468 11:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:58.468 11:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:58.468 11:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:58.468 11:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:58.468 11:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:58.468 11:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:58.468 11:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:58.468 11:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:58.468 11:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:58.468 11:19:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:58.468 11:19:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:58.468 11:19:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:58.468 11:19:53 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.468 11:19:53 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.468 11:19:53 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.468 11:19:53 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:58.468 11:19:53 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.469 11:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:11:58.469 11:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:58.469 11:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:58.469 11:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:58.469 11:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:58.469 11:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:58.469 11:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:58.469 11:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:58.469 11:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:58.469 11:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:58.469 11:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:58.469 11:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:58.469 11:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 
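Every test in this log is launched through the same run_test wrapper: print a START TEST banner, time the script, print END TEST. The real implementation in autotest_common.sh does more bookkeeping (xtrace toggling, the '[' 3 -le 1 ']' argument guard visible in these records), so the following is only a minimal sketch of the observable behavior; the banner and the real/user/sys summary it produces show up in the records that follow:

run_test_sketch() {
  local name=$1; shift
  printf '%s\n' '************************************' "START TEST $name" '************************************'
  time "$@"   # source of the real/user/sys lines seen per test in this log
  printf '%s\n' '************************************' "END TEST $name" '************************************'
}
run_test_sketch nvmf_example \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp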
00:11:58.469 11:19:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:58.469 11:19:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:58.469 11:19:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:58.469 ************************************ 00:11:58.469 START TEST nvmf_example 00:11:58.469 ************************************ 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:58.469 * Looking for test storage... 00:11:58.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:58.469 11:19:54 
nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:58.469 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.470 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:58.470 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:58.470 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:11:58.470 11:19:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:01.002 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:01.002 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:12:01.002 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:01.002 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:01.002 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:01.002 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 
-- # pci_drivers=() 00:12:01.002 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:01.002 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:12:01.002 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:01.002 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:12:01.002 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:12:01.002 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:12:01.002 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:01.003 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:01.003 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:01.003 Found net devices under 0000:84:00.0: cvl_0_0 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:01.003 11:19:56 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:01.003 Found net devices under 0000:84:00.1: cvl_0_1 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:01.003 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:01.261 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:01.262 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:01.262 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:01.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:01.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:12:01.262 00:12:01.262 --- 10.0.0.2 ping statistics --- 00:12:01.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.262 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:12:01.262 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:01.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:01.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:12:01.262 00:12:01.262 --- 10.0.0.1 ping statistics --- 00:12:01.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.262 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:12:01.262 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:01.262 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:12:01.262 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:01.262 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:01.262 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:01.262 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:01.262 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:01.262 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:01.262 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:01.262 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:12:01.262 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:12:01.262 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:01.262 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:01.262 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:12:01.262 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:12:01.262 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2057275 00:12:01.262 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:12:01.262 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:01.262 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2057275 00:12:01.262 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 2057275 ']' 00:12:01.262 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.262 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:01.262 11:19:56 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.262 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:01.262 11:19:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:01.262 EAL: No free 2048 kB hugepages reported on node 1 00:12:01.520 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:01.520 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:12:01.520 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:12:01.520 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:01.520 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:01.520 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:01.520 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.520 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:01.520 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.520 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:12:01.520 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.520 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:01.778 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.778 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:12:01.778 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:01.778 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.778 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:01.778 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.778 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:12:01.778 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:01.778 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.778 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:01.778 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.778 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:01.778 11:19:57 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:01.778 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:12:01.778 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:01.778 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:12:01.778 11:19:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:12:01.778 EAL: No free 2048 kB hugepages reported on node 1
00:12:11.776 Initializing NVMe Controllers
00:12:11.776 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:12:11.776 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:12:11.776 Initialization complete. Launching workers.
00:12:11.776 ========================================================
00:12:11.776 Latency(us)
00:12:11.776 Device Information : IOPS MiB/s Average min max
00:12:11.776 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14082.90 55.01 4544.70 990.53 15210.67
00:12:11.776 ========================================================
00:12:11.776 Total : 14082.90 55.01 4544.70 990.53 15210.67
00:12:11.777
00:12:11.777 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:12:11.777 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:12:11.777 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup
00:12:11.777 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync
00:12:11.777 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:12:11.777 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e
00:12:11.777 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20}
00:12:11.777 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:12:11.777 rmmod nvme_tcp
00:12:11.777 rmmod nvme_fabrics
00:12:11.777 rmmod nvme_keyring
00:12:11.777 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:12:11.777 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e
00:12:11.777 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0
00:12:11.777 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2057275 ']'
00:12:11.777 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2057275
00:12:11.777 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 2057275 ']'
00:12:11.777 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 2057275
00:12:11.777 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname
00:12:11.777 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:12:11.777 11:20:07
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2057275 00:12:12.035 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:12:12.035 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:12:12.035 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2057275' 00:12:12.035 killing process with pid 2057275 00:12:12.035 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 2057275 00:12:12.035 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 2057275 00:12:12.294 nvmf threads initialize successfully 00:12:12.294 bdev subsystem init successfully 00:12:12.294 created a nvmf target service 00:12:12.294 create targets's poll groups done 00:12:12.294 all subsystems of target started 00:12:12.294 nvmf target is running 00:12:12.294 all subsystems of target stopped 00:12:12.294 destroy targets's poll groups done 00:12:12.294 destroyed the nvmf target service 00:12:12.294 bdev subsystem finish successfully 00:12:12.294 nvmf threads destroy successfully 00:12:12.294 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:12.294 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:12.294 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:12.294 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:12.294 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:12.294 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.294 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:12.294 11:20:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.198 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:14.198 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:14.198 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:14.198 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:14.198 00:12:14.198 real 0m15.796s 00:12:14.198 user 0m42.052s 00:12:14.198 sys 0m3.943s 00:12:14.198 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:14.198 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:14.198 ************************************ 00:12:14.198 END TEST nvmf_example 00:12:14.198 ************************************ 00:12:14.198 11:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:14.198 11:20:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:14.198 11:20:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:14.198 11:20:09 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:14.461 ************************************ 00:12:14.461 START TEST nvmf_filesystem 00:12:14.461 ************************************ 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:14.461 * Looking for test storage... 00:12:14.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:14.461 11:20:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:12:14.461 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:12:14.462 11:20:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:12:14.462 11:20:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:12:14.462 11:20:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:14.462 #define SPDK_CONFIG_H 00:12:14.462 #define SPDK_CONFIG_APPS 1 00:12:14.462 #define SPDK_CONFIG_ARCH native 00:12:14.462 #undef SPDK_CONFIG_ASAN 00:12:14.462 #undef SPDK_CONFIG_AVAHI 00:12:14.462 #undef SPDK_CONFIG_CET 00:12:14.462 #define SPDK_CONFIG_COVERAGE 1 00:12:14.462 #define SPDK_CONFIG_CROSS_PREFIX 00:12:14.462 #undef SPDK_CONFIG_CRYPTO 00:12:14.462 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:14.462 #undef SPDK_CONFIG_CUSTOMOCF 00:12:14.462 #undef SPDK_CONFIG_DAOS 00:12:14.462 #define SPDK_CONFIG_DAOS_DIR 00:12:14.462 #define SPDK_CONFIG_DEBUG 1 00:12:14.462 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:14.462 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:14.462 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:14.462 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:14.462 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:14.462 #undef SPDK_CONFIG_DPDK_UADK 00:12:14.462 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:14.462 #define SPDK_CONFIG_EXAMPLES 1 00:12:14.462 #undef SPDK_CONFIG_FC 00:12:14.462 #define SPDK_CONFIG_FC_PATH 00:12:14.462 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:14.462 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:14.462 #undef SPDK_CONFIG_FUSE 00:12:14.462 #undef SPDK_CONFIG_FUZZER 00:12:14.462 #define SPDK_CONFIG_FUZZER_LIB 00:12:14.462 #undef SPDK_CONFIG_GOLANG 00:12:14.462 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:14.462 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:14.462 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:14.462 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:14.462 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:14.462 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:14.462 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:14.462 #define SPDK_CONFIG_IDXD 1 00:12:14.462 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:14.462 #undef SPDK_CONFIG_IPSEC_MB 00:12:14.462 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:14.462 #define SPDK_CONFIG_ISAL 1 00:12:14.462 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:14.462 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:14.462 #define SPDK_CONFIG_LIBDIR 00:12:14.462 #undef SPDK_CONFIG_LTO 00:12:14.462 #define SPDK_CONFIG_MAX_LCORES 128 00:12:14.462 #define SPDK_CONFIG_NVME_CUSE 1 00:12:14.462 #undef SPDK_CONFIG_OCF 00:12:14.462 #define SPDK_CONFIG_OCF_PATH 00:12:14.462 #define SPDK_CONFIG_OPENSSL_PATH 00:12:14.462 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:14.462 #define SPDK_CONFIG_PGO_DIR 00:12:14.462 #undef SPDK_CONFIG_PGO_USE 00:12:14.462 #define SPDK_CONFIG_PREFIX /usr/local 00:12:14.462 #undef SPDK_CONFIG_RAID5F 00:12:14.462 #undef SPDK_CONFIG_RBD 00:12:14.462 #define SPDK_CONFIG_RDMA 1 00:12:14.462 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:14.462 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:14.462 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:14.462 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:14.462 #define SPDK_CONFIG_SHARED 1 00:12:14.462 #undef SPDK_CONFIG_SMA 00:12:14.462 #define SPDK_CONFIG_TESTS 1 00:12:14.462 #undef SPDK_CONFIG_TSAN 00:12:14.462 #define SPDK_CONFIG_UBLK 1 00:12:14.462 #define SPDK_CONFIG_UBSAN 1 00:12:14.462 #undef SPDK_CONFIG_UNIT_TESTS 00:12:14.462 #undef SPDK_CONFIG_URING 00:12:14.462 #define SPDK_CONFIG_URING_PATH 00:12:14.462 #undef SPDK_CONFIG_URING_ZNS 00:12:14.462 #undef SPDK_CONFIG_USDT 00:12:14.462 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:14.462 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:14.462 #define SPDK_CONFIG_VFIO_USER 1 00:12:14.462 #define 
SPDK_CONFIG_VFIO_USER_DIR 00:12:14.462 #define SPDK_CONFIG_VHOST 1 00:12:14.462 #define SPDK_CONFIG_VIRTIO 1 00:12:14.462 #undef SPDK_CONFIG_VTUNE 00:12:14.462 #define SPDK_CONFIG_VTUNE_DIR 00:12:14.462 #define SPDK_CONFIG_WERROR 1 00:12:14.462 #define SPDK_CONFIG_WPDK_DIR 00:12:14.462 #undef SPDK_CONFIG_XNVME 00:12:14.462 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:14.462 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.463 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.463 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.463 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.463 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.463 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:14.463 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.463 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:14.463 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:14.463 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:14.463 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:14.463 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:14.463 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:14.463 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:14.463 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:12:14.463 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:12:14.463 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:14.463 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:14.463 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:14.463 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:14.463 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:14.463 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:14.463 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:14.463 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:14.463 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:14.463 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:14.463 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:14.463 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:14.463 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:14.463 11:20:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:14.463 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:12:14.463 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:14.463 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:14.463 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:12:14.463 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:12:14.463 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:14.463 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:14.463 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:14.463 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:14.463 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:14.463 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:14.463 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:14.463 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 
00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:14.463 11:20:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:12:14.463 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:14.464 11:20:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export 
SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:12:14.464 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export valgrind= 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j48 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 2058845 ]] 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 2058845 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.k9OQ7m 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.k9OQ7m/tests/target /tmp/spdk.k9OQ7m 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@329 -- # df -T 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=949354496 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4335075328 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=38641209344 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=45083295744 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=6442086400 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=22531727360 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=22541647872 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=9920512 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 
00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=8994226176 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=9016659968 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=22433792 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=22540746752 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=22541647872 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=901120 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:12:14.465 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=4508323840 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=4508327936 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:12:14.466 * Looking for test storage... 
00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=38641209344 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # new_size=8656678912 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:14.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same three toolchain directories repeated several more times]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same repeated toolchain directories]:/var/lib/snapd/snap/bin 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same repeated toolchain directories]:/var/lib/snapd/snap/bin 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[same repeated toolchain directories]:/var/lib/snapd/snap/bin 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:14.466 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:14.467 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 
']' 00:12:14.467 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:14.467 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:14.467 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:14.467 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:14.467 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:14.467 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:12:14.467 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:14.467 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:14.467 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:14.467 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:14.467 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:14.467 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:14.467 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:14.467 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.467 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:14.467 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.467 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:14.467 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:14.467 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:12:14.467 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:17.755 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:17.755 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:12:17.755 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:17.755 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:17.755 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:17.755 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:17.755 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:17.755 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:12:17.755 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:17.755 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:12:17.755 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:12:17.755 
11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:12:17.755 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:12:17.755 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:12:17.755 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:12:17.755 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:17.755 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:17.755 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:17.756 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:17.756 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:17.756 Found net devices under 0000:84:00.0: cvl_0_0 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:17.756 Found net devices under 0000:84:00.1: cvl_0_1 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:17.756 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:17.756 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:12:17.756 00:12:17.756 --- 10.0.0.2 ping statistics --- 00:12:17.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.756 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:17.756 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:17.756 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:12:17.756 00:12:17.756 --- 10.0.0.1 ping statistics --- 00:12:17.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.756 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:17.756 ************************************ 00:12:17.756 START TEST nvmf_filesystem_no_in_capsule 00:12:17.756 ************************************ 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:17.756 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:17.757 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:17.757 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:17.757 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.757 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2060605 00:12:17.757 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:17.757 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2060605 00:12:17.757 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 2060605 ']' 00:12:17.757 
11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.757 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:17.757 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.757 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:17.757 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.757 [2024-07-26 11:20:13.051451] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:12:17.757 [2024-07-26 11:20:13.051566] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:17.757 EAL: No free 2048 kB hugepages reported on node 1 00:12:17.757 [2024-07-26 11:20:13.161882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:17.757 [2024-07-26 11:20:13.288786] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:17.757 [2024-07-26 11:20:13.288843] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:17.757 [2024-07-26 11:20:13.288860] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:17.757 [2024-07-26 11:20:13.288875] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:17.757 [2024-07-26 11:20:13.288887] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
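For reference, the nvmf_tcp_init and nvmfappstart sequence traced above reduces to a handful of iproute2 commands plus one process launch: the target-side port is moved into a private network namespace so NVMe/TCP traffic genuinely crosses the link between the two ports, and nvmf_tgt then runs inside that namespace. Condensed (a sketch of the same commands already visible in the trace):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP
    ping -c 1 10.0.0.2                                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator
    modprobe nvme-tcp                                                  # host-side kernel driver
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
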
00:12:17.757 [2024-07-26 11:20:13.288943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:17.757 [2024-07-26 11:20:13.288973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:17.757 [2024-07-26 11:20:13.289041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:17.757 [2024-07-26 11:20:13.289045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.016 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:18.016 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:12:18.016 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:18.016 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:18.016 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.016 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:18.016 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:18.016 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:18.016 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.016 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.016 [2024-07-26 11:20:13.455284] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:18.016 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.016 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:18.016 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.016 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.016 Malloc1 00:12:18.016 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.016 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:18.016 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.016 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.016 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.016 11:20:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:18.016 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.016 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.016 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.016 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:18.016 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.016 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.016 [2024-07-26 11:20:13.650314] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:18.016 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.016 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:18.016 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:12:18.016 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:12:18.016 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:12:18.016 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:12:18.016 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:18.016 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.016 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.016 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.016 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:12:18.016 { 00:12:18.016 "name": "Malloc1", 00:12:18.016 "aliases": [ 00:12:18.016 "d20d8920-b2f9-4516-b957-f5e6912b8ead" 00:12:18.016 ], 00:12:18.016 "product_name": "Malloc disk", 00:12:18.016 "block_size": 512, 00:12:18.016 "num_blocks": 1048576, 00:12:18.016 "uuid": "d20d8920-b2f9-4516-b957-f5e6912b8ead", 00:12:18.016 "assigned_rate_limits": { 00:12:18.016 "rw_ios_per_sec": 0, 00:12:18.016 "rw_mbytes_per_sec": 0, 00:12:18.016 "r_mbytes_per_sec": 0, 00:12:18.016 "w_mbytes_per_sec": 0 00:12:18.016 }, 00:12:18.016 "claimed": true, 00:12:18.016 "claim_type": "exclusive_write", 00:12:18.016 "zoned": false, 00:12:18.016 "supported_io_types": { 00:12:18.016 "read": 
true, 00:12:18.016 "write": true, 00:12:18.016 "unmap": true, 00:12:18.016 "flush": true, 00:12:18.016 "reset": true, 00:12:18.016 "nvme_admin": false, 00:12:18.016 "nvme_io": false, 00:12:18.016 "nvme_io_md": false, 00:12:18.016 "write_zeroes": true, 00:12:18.016 "zcopy": true, 00:12:18.016 "get_zone_info": false, 00:12:18.016 "zone_management": false, 00:12:18.016 "zone_append": false, 00:12:18.016 "compare": false, 00:12:18.016 "compare_and_write": false, 00:12:18.016 "abort": true, 00:12:18.016 "seek_hole": false, 00:12:18.016 "seek_data": false, 00:12:18.016 "copy": true, 00:12:18.016 "nvme_iov_md": false 00:12:18.016 }, 00:12:18.016 "memory_domains": [ 00:12:18.016 { 00:12:18.016 "dma_device_id": "system", 00:12:18.016 "dma_device_type": 1 00:12:18.016 }, 00:12:18.016 { 00:12:18.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.016 "dma_device_type": 2 00:12:18.016 } 00:12:18.016 ], 00:12:18.016 "driver_specific": {} 00:12:18.016 } 00:12:18.016 ]' 00:12:18.016 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:12:18.274 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:12:18.274 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:12:18.274 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:12:18.274 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:12:18.274 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:12:18.274 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:18.274 11:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:18.840 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:18.840 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:12:18.840 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:18.840 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:18.840 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:21.369 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:21.369 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:21.369 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:12:21.369 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:21.369 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:21.369 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:21.369 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:21.369 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:21.369 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:21.369 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:21.369 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:21.369 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:21.369 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:21.369 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:21.369 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:21.369 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:21.369 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:21.369 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:21.627 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:22.561 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:22.561 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:22.561 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:22.561 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:22.561 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:22.561 ************************************ 00:12:22.561 START TEST filesystem_ext4 00:12:22.561 ************************************ 00:12:22.561 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
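For reference, everything between rpc_cmd nvmf_create_transport and partprobe above is the standard export-and-connect dance. With scripts/rpc.py standing in for the harness's rpc_cmd wrapper (same RPC method names; a sketch, not the verbatim harness code):

    # target side: create a 512 MiB malloc bdev and export it over NVMe/TCP
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: connect, locate the block device by serial, carve one GPT partition
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
    parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe
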
00:12:22.561 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:22.561 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:22.561 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:22.561 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:22.561 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:22.561 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:22.561 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:22.561 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:22.561 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:22.561 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:22.561 mke2fs 1.46.5 (30-Dec-2021) 00:12:22.819 Discarding device blocks: 0/522240 done 00:12:22.819 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:22.819 Filesystem UUID: a3ffe111-274a-4cac-82d6-631c1d7da0e8 00:12:22.819 Superblock backups stored on blocks: 00:12:22.819 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:22.819 00:12:22.819 Allocating group tables: 0/64 done 00:12:22.819 Writing inode tables: 0/64 done 00:12:23.076 Creating journal (8192 blocks): done 00:12:23.900 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:12:23.900 00:12:23.900 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:23.900 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:24.158 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:24.159 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:24.159 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:24.159 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:24.159 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:24.159 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:24.159 
11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2060605 00:12:24.159 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:24.159 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:24.159 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:24.159 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:24.159 00:12:24.159 real 0m1.572s 00:12:24.159 user 0m0.022s 00:12:24.159 sys 0m0.049s 00:12:24.159 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:24.159 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:24.159 ************************************ 00:12:24.159 END TEST filesystem_ext4 00:12:24.159 ************************************ 00:12:24.159 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:24.159 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:24.159 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:24.159 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:24.159 ************************************ 00:12:24.159 START TEST filesystem_btrfs 00:12:24.159 ************************************ 00:12:24.159 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:24.159 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:24.159 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:24.159 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:24.159 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:24.159 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:24.159 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:24.159 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:24.159 11:20:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:24.159 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:24.159 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:24.725 btrfs-progs v6.6.2 00:12:24.725 See https://btrfs.readthedocs.io for more information. 00:12:24.725 00:12:24.725 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:24.725 NOTE: several default settings have changed in version 5.15, please make sure 00:12:24.725 this does not affect your deployments: 00:12:24.725 - DUP for metadata (-m dup) 00:12:24.725 - enabled no-holes (-O no-holes) 00:12:24.725 - enabled free-space-tree (-R free-space-tree) 00:12:24.725 00:12:24.725 Label: (null) 00:12:24.725 UUID: 7ad06699-b101-45df-9997-6bec3e7e85c5 00:12:24.725 Node size: 16384 00:12:24.725 Sector size: 4096 00:12:24.725 Filesystem size: 510.00MiB 00:12:24.725 Block group profiles: 00:12:24.725 Data: single 8.00MiB 00:12:24.725 Metadata: DUP 32.00MiB 00:12:24.725 System: DUP 8.00MiB 00:12:24.725 SSD detected: yes 00:12:24.725 Zoned device: no 00:12:24.725 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:12:24.725 Runtime features: free-space-tree 00:12:24.725 Checksum: crc32c 00:12:24.725 Number of devices: 1 00:12:24.725 Devices: 00:12:24.725 ID SIZE PATH 00:12:24.725 1 510.00MiB /dev/nvme0n1p1 00:12:24.725 00:12:24.725 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:24.725 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:24.981 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:24.981 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:24.981 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:24.981 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:24.981 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:24.981 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:24.981 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2060605 00:12:24.981 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:24.981 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:24.981 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # 
lsblk -l -o NAME 00:12:24.981 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:24.981 00:12:24.981 real 0m0.838s 00:12:24.981 user 0m0.024s 00:12:24.981 sys 0m0.109s 00:12:24.981 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:24.981 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:24.981 ************************************ 00:12:24.981 END TEST filesystem_btrfs 00:12:24.981 ************************************ 00:12:24.981 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:24.981 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:24.981 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:24.981 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:24.981 ************************************ 00:12:24.981 START TEST filesystem_xfs 00:12:24.981 ************************************ 00:12:24.981 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:24.981 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:24.981 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:24.981 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:24.981 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:24.981 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:24.981 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:24.981 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:12:24.981 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:24.981 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:24.981 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:25.238 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:25.238 = sectsz=512 attr=2, projid32bit=1 00:12:25.238 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:25.238 = reflink=1 bigtime=1 
inobtcount=1 nrext64=0 00:12:25.238 data = bsize=4096 blocks=130560, imaxpct=25 00:12:25.238 = sunit=0 swidth=0 blks 00:12:25.238 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:25.239 log =internal log bsize=4096 blocks=16384, version=2 00:12:25.239 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:25.239 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:26.171 Discarding blocks...Done. 00:12:26.171 11:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:26.171 11:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:28.065 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:28.065 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:28.065 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:28.065 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:28.065 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:28.065 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:28.065 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2060605 00:12:28.065 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:28.065 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:28.065 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:28.065 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:28.065 00:12:28.065 real 0m3.084s 00:12:28.065 user 0m0.019s 00:12:28.065 sys 0m0.052s 00:12:28.065 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:28.065 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:28.065 ************************************ 00:12:28.065 END TEST filesystem_xfs 00:12:28.065 ************************************ 00:12:28.322 11:20:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:28.580 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:28.580 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:28.580 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:12:28.580 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:28.580 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:12:28.580 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:28.580 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:28.580 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:28.580 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:28.580 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:28.580 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:28.580 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.580 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:28.580 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.580 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:28.580 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2060605 00:12:28.580 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2060605 ']' 00:12:28.580 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2060605 00:12:28.580 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:28.580 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:28.580 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2060605 00:12:28.580 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:28.580 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:28.580 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2060605' 00:12:28.580 killing process with pid 2060605 00:12:28.580 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 2060605 00:12:28.580 11:20:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 2060605 00:12:29.146 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:29.146 00:12:29.146 real 0m11.774s 00:12:29.146 user 0m44.877s 00:12:29.146 sys 0m1.676s 00:12:29.146 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:29.146 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:29.146 ************************************ 00:12:29.146 END TEST nvmf_filesystem_no_in_capsule 00:12:29.146 ************************************ 00:12:29.146 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:29.146 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:29.146 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:29.146 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:29.146 ************************************ 00:12:29.146 START TEST nvmf_filesystem_in_capsule 00:12:29.146 ************************************ 00:12:29.146 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:12:29.146 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:29.146 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:29.146 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:29.146 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:29.146 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:29.146 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2062171 00:12:29.146 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:29.146 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2062171 00:12:29.146 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 2062171 ']' 00:12:29.146 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.146 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:29.146 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
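At this point nvmfappstart has relaunched nvmf_tgt (nvmfpid=2062171) inside the cvl_0_0_ns_spdk namespace, and waitforlisten blocks until the new app answers on its RPC socket (rpc_addr=/var/tmp/spdk.sock, max_retries=100 per the trace above). A minimal sketch of that polling pattern, assuming SPDK's scripts/rpc.py client is on the path; the helper name wait_for_rpc and the 0.5 s back-off are illustrative, not the autotest implementation:

wait_for_rpc() {                                   # illustrative sketch, not the autotest helper
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i=0
    while (( i++ < 100 )); do                      # mirrors max_retries=100 in the trace
        kill -0 "$pid" 2>/dev/null || return 1     # give up if the target died while waiting
        scripts/rpc.py -s "$rpc_addr" rpc_get_methods \
            >/dev/null 2>&1 && return 0            # socket exists and the app is answering
        sleep 0.5
    done
    return 1
}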
00:12:29.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.146 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:29.146 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:29.423 [2024-07-26 11:20:24.844862] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:12:29.423 [2024-07-26 11:20:24.844953] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.423 EAL: No free 2048 kB hugepages reported on node 1 00:12:29.423 [2024-07-26 11:20:24.923364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:29.423 [2024-07-26 11:20:25.045863] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:29.423 [2024-07-26 11:20:25.045923] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:29.423 [2024-07-26 11:20:25.045940] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:29.423 [2024-07-26 11:20:25.045955] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:29.423 [2024-07-26 11:20:25.045968] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:29.423 [2024-07-26 11:20:25.046053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.423 [2024-07-26 11:20:25.046107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:29.423 [2024-07-26 11:20:25.046160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:29.423 [2024-07-26 11:20:25.046163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.690 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:29.690 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:12:29.690 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:29.690 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:29.690 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:29.690 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:29.690 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:29.690 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:29.690 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.690 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 
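The nvmf_create_transport call traced above is where this pass differs from the earlier no_in_capsule run: -c 4096 allows initiators to carry up to 4 KiB of write payload inside the NVMe/TCP command capsule itself, skipping the R2T round trip for small writes, while -u 8192 sets the transport's I/O unit size. An equivalent standalone invocation, assuming the long-option spellings in SPDK's scripts/rpc.py; treat the exact flag names as illustrative:

scripts/rpc.py nvmf_create_transport --trtype tcp \
    --io-unit-size 8192 \
    --in-capsule-data-size 4096    # writes up to 4 KiB travel in-capsule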
00:12:29.690 [2024-07-26 11:20:25.217994] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:29.690 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.690 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:29.690 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.690 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:29.948 Malloc1 00:12:29.949 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.949 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:29.949 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.949 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:29.949 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.949 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:29.949 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.949 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:29.949 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.949 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:29.949 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.949 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:29.949 [2024-07-26 11:20:25.403807] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.949 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.949 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:29.949 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:12:29.949 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:12:29.949 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:12:29.949 11:20:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:12:29.949 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:29.949 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.949 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:29.949 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.949 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:12:29.949 { 00:12:29.949 "name": "Malloc1", 00:12:29.949 "aliases": [ 00:12:29.949 "71360058-476e-481b-8009-34cf2dfd96c9" 00:12:29.949 ], 00:12:29.949 "product_name": "Malloc disk", 00:12:29.949 "block_size": 512, 00:12:29.949 "num_blocks": 1048576, 00:12:29.949 "uuid": "71360058-476e-481b-8009-34cf2dfd96c9", 00:12:29.949 "assigned_rate_limits": { 00:12:29.949 "rw_ios_per_sec": 0, 00:12:29.949 "rw_mbytes_per_sec": 0, 00:12:29.949 "r_mbytes_per_sec": 0, 00:12:29.949 "w_mbytes_per_sec": 0 00:12:29.949 }, 00:12:29.949 "claimed": true, 00:12:29.949 "claim_type": "exclusive_write", 00:12:29.949 "zoned": false, 00:12:29.949 "supported_io_types": { 00:12:29.949 "read": true, 00:12:29.949 "write": true, 00:12:29.949 "unmap": true, 00:12:29.949 "flush": true, 00:12:29.949 "reset": true, 00:12:29.949 "nvme_admin": false, 00:12:29.949 "nvme_io": false, 00:12:29.949 "nvme_io_md": false, 00:12:29.949 "write_zeroes": true, 00:12:29.949 "zcopy": true, 00:12:29.949 "get_zone_info": false, 00:12:29.949 "zone_management": false, 00:12:29.949 "zone_append": false, 00:12:29.949 "compare": false, 00:12:29.949 "compare_and_write": false, 00:12:29.949 "abort": true, 00:12:29.949 "seek_hole": false, 00:12:29.949 "seek_data": false, 00:12:29.949 "copy": true, 00:12:29.949 "nvme_iov_md": false 00:12:29.949 }, 00:12:29.949 "memory_domains": [ 00:12:29.949 { 00:12:29.949 "dma_device_id": "system", 00:12:29.949 "dma_device_type": 1 00:12:29.949 }, 00:12:29.949 { 00:12:29.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.949 "dma_device_type": 2 00:12:29.949 } 00:12:29.949 ], 00:12:29.949 "driver_specific": {} 00:12:29.949 } 00:12:29.949 ]' 00:12:29.949 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:12:29.949 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:12:29.949 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:12:29.949 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:12:29.949 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:12:29.949 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:12:29.949 11:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:29.949 11:20:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:30.880 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:30.880 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:12:30.880 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:30.880 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:30.880 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:32.929 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:32.929 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:32.929 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:32.929 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:32.929 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:32.929 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:32.929 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:32.929 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:32.929 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:32.929 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:32.929 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:32.929 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:32.929 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:32.929 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:32.929 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:32.929 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:32.929 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:32.929 11:20:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:33.862 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:34.796 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:34.796 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:34.796 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:34.796 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:34.796 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:34.796 ************************************ 00:12:34.796 START TEST filesystem_in_capsule_ext4 00:12:34.796 ************************************ 00:12:34.796 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:34.796 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:34.796 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:34.796 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:34.796 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:34.796 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:34.796 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:34.796 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:34.796 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:34.796 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:34.796 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:34.796 mke2fs 1.46.5 (30-Dec-2021) 00:12:34.796 Discarding device blocks: 0/522240 done 00:12:34.796 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:34.796 Filesystem UUID: 9b099a6f-9f47-4296-a1fe-34e4a119cc6b 00:12:34.796 Superblock backups stored on blocks: 00:12:34.796 8193, 24577, 40961, 57345, 73729, 204801, 
221185, 401409 00:12:34.796 00:12:34.796 Allocating group tables: 0/64 done 00:12:34.796 Writing inode tables: 0/64 done 00:12:35.729 Creating journal (8192 blocks): done 00:12:35.729 Writing superblocks and filesystem accounting information: 0/64 done 00:12:35.729 00:12:35.729 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:35.729 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:36.665 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:36.665 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:36.665 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:36.665 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:36.665 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:36.665 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:36.665 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2062171 00:12:36.665 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:36.665 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:36.665 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:36.665 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:36.665 00:12:36.665 real 0m1.871s 00:12:36.665 user 0m0.022s 00:12:36.665 sys 0m0.049s 00:12:36.665 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:36.665 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:36.665 ************************************ 00:12:36.665 END TEST filesystem_in_capsule_ext4 00:12:36.665 ************************************ 00:12:36.665 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:36.665 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:36.665 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:36.665 11:20:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:36.665 ************************************ 00:12:36.665 START TEST filesystem_in_capsule_btrfs 00:12:36.665 ************************************ 00:12:36.665 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:36.665 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:36.665 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:36.665 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:36.665 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:36.665 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:36.665 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:36.665 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:36.665 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:36.665 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:36.665 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:36.924 btrfs-progs v6.6.2 00:12:36.924 See https://btrfs.readthedocs.io for more information. 00:12:36.924 00:12:36.924 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:36.924 NOTE: several default settings have changed in version 5.15, please make sure 00:12:36.924 this does not affect your deployments: 00:12:36.924 - DUP for metadata (-m dup) 00:12:36.924 - enabled no-holes (-O no-holes) 00:12:36.924 - enabled free-space-tree (-R free-space-tree) 00:12:36.924 00:12:36.924 Label: (null) 00:12:36.924 UUID: 80948172-ceed-407a-b37b-d01b2e6ae27e 00:12:36.924 Node size: 16384 00:12:36.924 Sector size: 4096 00:12:36.924 Filesystem size: 510.00MiB 00:12:36.924 Block group profiles: 00:12:36.924 Data: single 8.00MiB 00:12:36.924 Metadata: DUP 32.00MiB 00:12:36.924 System: DUP 8.00MiB 00:12:36.924 SSD detected: yes 00:12:36.924 Zoned device: no 00:12:36.924 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:12:36.924 Runtime features: free-space-tree 00:12:36.924 Checksum: crc32c 00:12:36.924 Number of devices: 1 00:12:36.924 Devices: 00:12:36.924 ID SIZE PATH 00:12:36.924 1 510.00MiB /dev/nvme0n1p1 00:12:36.924 00:12:36.924 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:36.924 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:37.182 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:37.182 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:37.182 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:37.182 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:37.441 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:37.441 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:37.441 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2062171 00:12:37.441 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:37.441 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:37.441 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:37.441 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:37.441 00:12:37.441 real 0m0.706s 00:12:37.441 user 0m0.021s 00:12:37.441 sys 0m0.114s 00:12:37.441 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:37.441 11:20:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:37.441 ************************************ 00:12:37.441 END TEST filesystem_in_capsule_btrfs 00:12:37.441 ************************************ 00:12:37.441 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:37.441 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:37.441 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:37.441 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:37.441 ************************************ 00:12:37.441 START TEST filesystem_in_capsule_xfs 00:12:37.441 ************************************ 00:12:37.441 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:37.441 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:37.441 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:37.441 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:37.441 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:37.441 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:37.441 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:37.441 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:12:37.441 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:37.441 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:37.441 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:37.441 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:37.442 = sectsz=512 attr=2, projid32bit=1 00:12:37.442 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:37.442 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:37.442 data = bsize=4096 blocks=130560, imaxpct=25 00:12:37.442 = sunit=0 swidth=0 blks 00:12:37.442 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:37.442 log =internal log bsize=4096 blocks=16384, version=2 00:12:37.442 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:37.442 realtime =none extsz=4096 blocks=0, 
rtextents=0 00:12:38.375 Discarding blocks...Done. 00:12:38.375 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:38.375 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:40.905 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:40.905 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:40.905 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:40.905 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:40.905 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:40.905 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:40.905 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2062171 00:12:40.905 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:40.905 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:40.905 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:40.905 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:40.905 00:12:40.905 real 0m3.181s 00:12:40.905 user 0m0.023s 00:12:40.905 sys 0m0.058s 00:12:40.905 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:40.905 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:40.905 ************************************ 00:12:40.905 END TEST filesystem_in_capsule_xfs 00:12:40.905 ************************************ 00:12:40.905 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:40.905 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:40.905 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:40.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.905 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:40.905 11:20:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:12:40.905 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:40.905 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.906 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:40.906 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.906 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:40.906 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.906 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.906 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:40.906 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.906 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:40.906 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2062171 00:12:40.906 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2062171 ']' 00:12:40.906 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2062171 00:12:40.906 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:40.906 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:40.906 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2062171 00:12:40.906 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:40.906 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:40.906 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2062171' 00:12:40.906 killing process with pid 2062171 00:12:40.906 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 2062171 00:12:40.906 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 2062171 00:12:41.473 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:41.473 00:12:41.473 real 0m12.050s 00:12:41.473 user 0m46.241s 
00:12:41.473 sys 0m1.650s 00:12:41.473 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:41.473 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:41.473 ************************************ 00:12:41.473 END TEST nvmf_filesystem_in_capsule 00:12:41.473 ************************************ 00:12:41.473 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:41.473 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:41.473 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:12:41.473 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:41.473 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:12:41.473 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:41.473 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:41.474 rmmod nvme_tcp 00:12:41.474 rmmod nvme_fabrics 00:12:41.474 rmmod nvme_keyring 00:12:41.474 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:41.474 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:12:41.474 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:12:41.474 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:41.474 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:41.474 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:41.474 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:41.474 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:41.474 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:41.474 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.474 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:41.474 11:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.378 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:43.378 00:12:43.378 real 0m29.096s 00:12:43.378 user 1m32.141s 00:12:43.378 sys 0m5.586s 00:12:43.378 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:43.378 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:43.378 ************************************ 00:12:43.378 END TEST nvmf_filesystem 00:12:43.378 ************************************ 00:12:43.378 11:20:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:43.378 11:20:39 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:43.378 11:20:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:43.378 11:20:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:43.637 ************************************ 00:12:43.637 START TEST nvmf_target_discovery 00:12:43.637 ************************************ 00:12:43.637 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:43.637 * Looking for test storage... 00:12:43.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:43.637 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:43.637 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:43.637 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:43.637 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:43.637 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:43.637 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:43.637 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:43.637 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:43.637 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:43.637 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:43.637 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:43.637 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:43.637 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:43.637 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:43.637 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:43.637 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:43.637 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:43.637 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:43.637 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:43.637 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:43.637 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:43.637 11:20:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:43.638 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.638 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.638 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.638 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:43.638 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.638 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:12:43.638 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:43.638 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:43.638 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:43.638 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:43.638 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:43.638 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:43.638 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:43.638 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:43.638 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:43.638 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:43.638 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:43.638 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:43.638 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:43.638 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:43.638 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:43.638 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:43.638 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:43.638 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:43.638 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.638 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:43.638 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.638 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:43.638 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:43.638 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:12:43.638 11:20:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:12:46.174 11:20:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:46.174 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:46.174 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:46.174 Found net devices under 0000:84:00.0: cvl_0_0 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:46.174 11:20:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:46.174 Found net devices under 0000:84:00.1: cvl_0_1 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:46.174 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:46.175 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:46.175 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:46.175 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:46.175 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:46.175 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:46.175 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:46.433 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:46.433 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:46.433 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:46.433 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:46.433 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:46.433 11:20:41 
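Stripped of the xtrace noise, the device scan above is a bucket sort of PCI vendor:device IDs into NIC families, followed by a /sys lookup that maps each matched function to its kernel netdev name. A standalone sketch of that logic, with the Mellanox arm collapsed from common.sh's enumerated ID list:

    intel=0x8086 mellanox=0x15b3
    e810=() x722=() mlx=()
    while read -r slot class vendor device _; do
        case "0x$vendor:0x$device" in
            "$intel:0x1592" | "$intel:0x159b") e810+=("$slot") ;;  # E810 (ice)
            "$intel:0x37d2")                   x722+=("$slot") ;;  # X722 (i40e)
            "$mellanox":*)                     mlx+=("$slot")  ;;  # ConnectX (simplified)
        esac
    done < <(lspci -Dnmm | tr -d '"')
    pci_devs=("${e810[@]}")          # this rig: two E810 ports, 0000:84:00.0/.1
    for pci in "${pci_devs[@]}"; do
        ls "/sys/bus/pci/devices/$pci/net/"   # netdev name, e.g. cvl_0_0
    done
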
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:46.433 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:46.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:46.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:12:46.433 00:12:46.433 --- 10.0.0.2 ping statistics --- 00:12:46.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.433 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:12:46.433 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:46.433 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:46.433 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:12:46.434 00:12:46.434 --- 10.0.0.1 ping statistics --- 00:12:46.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.434 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:12:46.434 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:46.434 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:12:46.434 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:46.434 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:46.434 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:46.434 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:46.434 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:46.434 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:46.434 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:46.434 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:46.434 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:46.434 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:46.434 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.434 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2065780 00:12:46.434 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:46.434 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2065780 00:12:46.434 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 2065780 ']' 00:12:46.434 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.434 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:46.434 11:20:42 
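The nvmf_tcp_init sequence interleaved above reduces to a short recipe: flush the port pair, hide the target port inside a private network namespace, address both ends, open the NVMe/TCP port, and prove reachability in both directions before any NVMe traffic flows. A minimal replay, assuming the same cvl_0_0/cvl_0_1 names and 10.0.0.0/24 plan the log uses:

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"          # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                       # root ns -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1   # target ns -> initiator
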
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.434 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:46.434 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.434 [2024-07-26 11:20:42.083223] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:12:46.434 [2024-07-26 11:20:42.083319] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.692 EAL: No free 2048 kB hugepages reported on node 1 00:12:46.692 [2024-07-26 11:20:42.159185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:46.692 [2024-07-26 11:20:42.285150] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:46.692 [2024-07-26 11:20:42.285217] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:46.692 [2024-07-26 11:20:42.285245] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:46.692 [2024-07-26 11:20:42.285267] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:46.692 [2024-07-26 11:20:42.285286] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:46.692 [2024-07-26 11:20:42.285389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.692 [2024-07-26 11:20:42.285459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:46.692 [2024-07-26 11:20:42.285494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:46.692 [2024-07-26 11:20:42.285502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.260 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:47.260 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:12:47.260 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:47.260 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:47.260 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.260 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:47.260 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:47.260 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.260 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.260 [2024-07-26 11:20:42.656958] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:47.260 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
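nvmfappstart wraps the two steps the trace just showed: launch nvmf_tgt under the target namespace, then block until the app answers on its UNIX-domain RPC socket. A rough equivalent, approximating waitforlisten with an rpc_get_methods probe:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods &>/dev/null; do
        sleep 0.2        # /var/tmp/spdk.sock not accepting RPCs yet
    done
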
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.261 Null1 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.261 [2024-07-26 11:20:42.697293] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.261 Null2 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.261 11:20:42 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.261 Null3 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.261 Null4 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.261 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 
--hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:12:47.565 00:12:47.565 Discovery Log Number of Records 6, Generation counter 6 00:12:47.565 =====Discovery Log Entry 0====== 00:12:47.565 trtype: tcp 00:12:47.565 adrfam: ipv4 00:12:47.565 subtype: current discovery subsystem 00:12:47.565 treq: not required 00:12:47.565 portid: 0 00:12:47.565 trsvcid: 4420 00:12:47.565 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:47.565 traddr: 10.0.0.2 00:12:47.565 eflags: explicit discovery connections, duplicate discovery information 00:12:47.565 sectype: none 00:12:47.565 =====Discovery Log Entry 1====== 00:12:47.565 trtype: tcp 00:12:47.565 adrfam: ipv4 00:12:47.565 subtype: nvme subsystem 00:12:47.565 treq: not required 00:12:47.565 portid: 0 00:12:47.565 trsvcid: 4420 00:12:47.565 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:47.565 traddr: 10.0.0.2 00:12:47.565 eflags: none 00:12:47.565 sectype: none 00:12:47.565 =====Discovery Log Entry 2====== 00:12:47.565 trtype: tcp 00:12:47.565 adrfam: ipv4 00:12:47.565 subtype: nvme subsystem 00:12:47.565 treq: not required 00:12:47.565 portid: 0 00:12:47.565 trsvcid: 4420 00:12:47.565 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:47.565 traddr: 10.0.0.2 00:12:47.565 eflags: none 00:12:47.565 sectype: none 00:12:47.565 =====Discovery Log Entry 3====== 00:12:47.565 trtype: tcp 00:12:47.565 adrfam: ipv4 00:12:47.565 subtype: nvme subsystem 00:12:47.565 treq: not required 00:12:47.565 portid: 0 00:12:47.565 trsvcid: 4420 00:12:47.565 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:47.565 traddr: 10.0.0.2 00:12:47.565 eflags: none 00:12:47.565 sectype: none 00:12:47.565 =====Discovery Log Entry 4====== 00:12:47.565 trtype: tcp 00:12:47.565 adrfam: ipv4 00:12:47.565 subtype: nvme subsystem 00:12:47.565 treq: not required 00:12:47.565 portid: 0 00:12:47.565 trsvcid: 4420 00:12:47.565 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:47.565 traddr: 10.0.0.2 00:12:47.565 eflags: none 00:12:47.565 sectype: none 00:12:47.565 =====Discovery Log Entry 5====== 00:12:47.565 trtype: tcp 00:12:47.565 adrfam: ipv4 00:12:47.565 subtype: discovery subsystem referral 00:12:47.565 treq: not required 00:12:47.565 portid: 0 00:12:47.565 trsvcid: 4430 00:12:47.565 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:47.565 traddr: 10.0.0.2 00:12:47.565 eflags: none 00:12:47.565 sectype: none 00:12:47.565 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:47.565 Perform nvmf subsystem discovery via RPC 00:12:47.565 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:47.565 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.565 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.565 [ 00:12:47.565 { 00:12:47.565 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:47.565 "subtype": "Discovery", 00:12:47.565 "listen_addresses": [ 00:12:47.565 { 00:12:47.565 "trtype": "TCP", 00:12:47.565 "adrfam": "IPv4", 00:12:47.565 "traddr": "10.0.0.2", 00:12:47.565 "trsvcid": "4420" 00:12:47.565 } 00:12:47.565 ], 00:12:47.565 "allow_any_host": true, 00:12:47.565 "hosts": [] 00:12:47.565 }, 00:12:47.565 { 00:12:47.565 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:47.565 "subtype": "NVMe", 00:12:47.565 "listen_addresses": [ 00:12:47.565 { 00:12:47.565 "trtype": "TCP", 00:12:47.565 "adrfam": "IPv4", 00:12:47.565 
"traddr": "10.0.0.2", 00:12:47.565 "trsvcid": "4420" 00:12:47.565 } 00:12:47.565 ], 00:12:47.565 "allow_any_host": true, 00:12:47.565 "hosts": [], 00:12:47.565 "serial_number": "SPDK00000000000001", 00:12:47.565 "model_number": "SPDK bdev Controller", 00:12:47.565 "max_namespaces": 32, 00:12:47.565 "min_cntlid": 1, 00:12:47.565 "max_cntlid": 65519, 00:12:47.565 "namespaces": [ 00:12:47.565 { 00:12:47.565 "nsid": 1, 00:12:47.565 "bdev_name": "Null1", 00:12:47.565 "name": "Null1", 00:12:47.565 "nguid": "4FF661AA5A7F4CFA9E8448675E2875D1", 00:12:47.565 "uuid": "4ff661aa-5a7f-4cfa-9e84-48675e2875d1" 00:12:47.565 } 00:12:47.565 ] 00:12:47.565 }, 00:12:47.565 { 00:12:47.565 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:47.565 "subtype": "NVMe", 00:12:47.565 "listen_addresses": [ 00:12:47.565 { 00:12:47.565 "trtype": "TCP", 00:12:47.565 "adrfam": "IPv4", 00:12:47.565 "traddr": "10.0.0.2", 00:12:47.565 "trsvcid": "4420" 00:12:47.565 } 00:12:47.565 ], 00:12:47.565 "allow_any_host": true, 00:12:47.565 "hosts": [], 00:12:47.565 "serial_number": "SPDK00000000000002", 00:12:47.565 "model_number": "SPDK bdev Controller", 00:12:47.565 "max_namespaces": 32, 00:12:47.565 "min_cntlid": 1, 00:12:47.565 "max_cntlid": 65519, 00:12:47.565 "namespaces": [ 00:12:47.565 { 00:12:47.565 "nsid": 1, 00:12:47.565 "bdev_name": "Null2", 00:12:47.565 "name": "Null2", 00:12:47.565 "nguid": "9F92237853844C61A4BF2995625A66F7", 00:12:47.565 "uuid": "9f922378-5384-4c61-a4bf-2995625a66f7" 00:12:47.565 } 00:12:47.565 ] 00:12:47.565 }, 00:12:47.565 { 00:12:47.565 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:47.565 "subtype": "NVMe", 00:12:47.565 "listen_addresses": [ 00:12:47.565 { 00:12:47.565 "trtype": "TCP", 00:12:47.565 "adrfam": "IPv4", 00:12:47.565 "traddr": "10.0.0.2", 00:12:47.565 "trsvcid": "4420" 00:12:47.565 } 00:12:47.565 ], 00:12:47.566 "allow_any_host": true, 00:12:47.566 "hosts": [], 00:12:47.566 "serial_number": "SPDK00000000000003", 00:12:47.566 "model_number": "SPDK bdev Controller", 00:12:47.566 "max_namespaces": 32, 00:12:47.566 "min_cntlid": 1, 00:12:47.566 "max_cntlid": 65519, 00:12:47.566 "namespaces": [ 00:12:47.566 { 00:12:47.566 "nsid": 1, 00:12:47.566 "bdev_name": "Null3", 00:12:47.566 "name": "Null3", 00:12:47.566 "nguid": "7DD492F54CD24DB19CB9FFD25C817305", 00:12:47.566 "uuid": "7dd492f5-4cd2-4db1-9cb9-ffd25c817305" 00:12:47.566 } 00:12:47.566 ] 00:12:47.566 }, 00:12:47.566 { 00:12:47.566 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:47.566 "subtype": "NVMe", 00:12:47.566 "listen_addresses": [ 00:12:47.566 { 00:12:47.566 "trtype": "TCP", 00:12:47.566 "adrfam": "IPv4", 00:12:47.566 "traddr": "10.0.0.2", 00:12:47.566 "trsvcid": "4420" 00:12:47.566 } 00:12:47.566 ], 00:12:47.566 "allow_any_host": true, 00:12:47.566 "hosts": [], 00:12:47.566 "serial_number": "SPDK00000000000004", 00:12:47.566 "model_number": "SPDK bdev Controller", 00:12:47.566 "max_namespaces": 32, 00:12:47.566 "min_cntlid": 1, 00:12:47.566 "max_cntlid": 65519, 00:12:47.566 "namespaces": [ 00:12:47.566 { 00:12:47.566 "nsid": 1, 00:12:47.566 "bdev_name": "Null4", 00:12:47.566 "name": "Null4", 00:12:47.566 "nguid": "1D0DFDD2E3EA4199A9FDE82EC0D29619", 00:12:47.566 "uuid": "1d0dfdd2-e3ea-4199-a9fd-e82ec0d29619" 00:12:47.566 } 00:12:47.566 ] 00:12:47.566 } 00:12:47.566 ] 00:12:47.566 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.566 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:47.566 11:20:42 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:47.566 11:20:43 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:47.566 rmmod nvme_tcp 00:12:47.566 rmmod nvme_fabrics 00:12:47.566 rmmod nvme_keyring 00:12:47.566 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:47.826 11:20:43 
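Everything between nvmf_create_transport and the rmmod lines is four passes of the same three RPCs, a discovery listener plus a referral, one verification pass, and a mirror-image teardown. Restated against scripts/rpc.py (rpc_cmd is a thin wrapper over it; /var/tmp/spdk.sock is the assumed default socket, and the --hostnqn/--hostid flags shown in the discover call above are elided):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192
    for i in 1 2 3 4; do
        $rpc bdev_null_create "Null$i" 102400 512
        $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
        $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

    nvme discover -t tcp -a 10.0.0.2 -s 4420   # six records, as printed above
    $rpc nvmf_get_subsystems                   # the JSON dump above

    for i in 1 2 3 4; do
        $rpc nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
        $rpc bdev_null_delete "Null$i"
    done
    $rpc nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
    [ -z "$($rpc bdev_get_bdevs | jq -r '.[].name')" ]   # nothing left behind
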
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:12:47.826 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:12:47.826 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2065780 ']' 00:12:47.826 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2065780 00:12:47.826 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 2065780 ']' 00:12:47.826 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 2065780 00:12:47.826 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:12:47.826 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:47.826 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2065780 00:12:47.826 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:47.826 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:47.826 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2065780' 00:12:47.826 killing process with pid 2065780 00:12:47.826 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 2065780 00:12:47.826 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 2065780 00:12:48.085 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:48.085 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:48.085 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:48.085 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:48.085 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:48.085 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.085 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:48.085 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.989 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:49.989 00:12:49.989 real 0m6.516s 00:12:49.989 user 0m5.971s 00:12:49.989 sys 0m2.534s 00:12:49.989 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:49.989 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:49.989 ************************************ 00:12:49.989 END TEST nvmf_target_discovery 00:12:49.989 ************************************ 00:12:49.989 11:20:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:49.989 11:20:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:49.989 11:20:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:49.989 11:20:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:49.989 ************************************ 00:12:49.989 START TEST nvmf_referrals 00:12:49.989 ************************************ 00:12:49.989 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:50.247 * Looking for test storage... 00:12:50.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:50.247 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:50.247 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:50.247 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:50.247 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:50.247 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:50.247 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:50.247 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:50.247 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:50.248 11:20:45 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:50.248 11:20:45 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:12:50.248 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:52.782 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:52.782 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:12:52.782 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:52.782 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:52.782 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:52.782 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:52.782 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:52.782 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # 
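referrals.sh defines three referral targets (127.0.0.2 through 127.0.0.4 on port 4430) against the well-known discovery NQN before re-running the same interface setup. Its core loop presumably amounts to adding and reading back referrals; a sketch under that assumption, with nvmf_discovery_get_referrals as the assumed read-back call:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    $rpc nvmf_discovery_get_referrals        # expect three entries back
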
net_devs=() 00:12:52.782 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:52.782 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:12:52.782 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:12:52.782 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:12:52.782 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:12:52.782 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:12:52.782 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:12:52.782 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:52.783 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.783 11:20:48 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:52.783 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:52.783 Found net devices under 0000:84:00.0: cvl_0_0 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 
00:12:52.783 Found net devices under 0000:84:00.1: cvl_0_1 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:52.783 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:52.783 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:12:52.783 00:12:52.783 --- 10.0.0.2 ping statistics --- 00:12:52.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.783 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:52.783 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:52.783 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:12:52.783 00:12:52.783 --- 10.0.0.1 ping statistics --- 00:12:52.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.783 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:52.783 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:53.041 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:53.041 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:53.041 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:53.041 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.041 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2067898 00:12:53.042 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:53.042 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2067898 00:12:53.042 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 2067898 ']' 00:12:53.042 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.042 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:53.042 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
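The trace above shows nvmfappstart launching the SPDK target under `ip netns exec`, so the listener it opens lives on the namespaced side of the link while the nvme-cli initiator stays in the root namespace. A minimal sketch of the launch-and-wait step, with a socket-polling loop standing in for the harness's real waitforlisten helper (illustrative, not the actual implementation):

    # Start the target inside the namespace: -i = shm ID, -e = tracepoint mask,
    # -m 0xF = run reactors on cores 0-3 (matching the four reactor notices below)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Block until the app's RPC socket exists before issuing any rpc_cmd calls
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done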
00:12:53.042 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:53.042 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.042 [2024-07-26 11:20:48.528611] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:12:53.042 [2024-07-26 11:20:48.528726] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:53.042 EAL: No free 2048 kB hugepages reported on node 1 00:12:53.042 [2024-07-26 11:20:48.619442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:53.300 [2024-07-26 11:20:48.745761] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:53.300 [2024-07-26 11:20:48.745825] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:53.300 [2024-07-26 11:20:48.745853] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:53.300 [2024-07-26 11:20:48.745875] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:53.300 [2024-07-26 11:20:48.745894] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:53.300 [2024-07-26 11:20:48.746000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:53.300 [2024-07-26 11:20:48.746096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:53.300 [2024-07-26 11:20:48.746149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:53.300 [2024-07-26 11:20:48.746165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.300 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:53.300 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:12:53.300 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:53.300 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:53.300 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.300 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:53.300 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:53.300 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.300 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.300 [2024-07-26 11:20:48.924030] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:53.300 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.300 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:53.300 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.300 11:20:48 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.300 [2024-07-26 11:20:48.936262] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:53.300 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.300 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:53.300 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.300 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.300 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.300 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:53.300 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.300 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.300 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.300 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:53.300 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.300 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.558 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.558 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:53.558 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.558 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:53.558 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.558 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.558 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:53.558 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:53.558 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:53.558 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:53.558 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.558 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:53.558 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.558 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:53.558 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.558 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 
127.0.0.3 127.0.0.4 00:12:53.558 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:53.558 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:53.558 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:53.558 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:53.558 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:53.558 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:53.558 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.816 11:20:49 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:53.816 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.074 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:54.074 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:54.074 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 
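At this point the harness has re-added 127.0.0.2:4430 twice: once as a plain discovery referral (-n discovery, which resolves to the well-known NQN nqn.2014-08.org.nvmexpress.discovery checked further down) and once pointing at a specific NVM subsystem NQN. rpc_cmd wraps SPDK's scripts/rpc.py; issued by hand, the equivalent calls would look roughly like this (the rpc.py path is assumed relative to the SPDK tree):

    # Referral to another discovery service:
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
    # Referral that advertises a specific NVM subsystem:
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
    # Both entries share a traddr, hence the sorted list "127.0.0.2 127.0.0.2" above:
    scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort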
00:12:54.074 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:54.074 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:54.074 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:54.074 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:54.074 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:54.074 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:54.074 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:54.074 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:54.074 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:54.074 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:54.074 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:54.074 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:54.074 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:54.074 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:54.074 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:54.074 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:54.074 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:54.074 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:54.332 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:54.332 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:54.332 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.332 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:54.332 11:20:49 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.332 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:54.332 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:54.332 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:54.332 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.332 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:54.332 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:54.332 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:54.332 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.332 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:54.332 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:54.332 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:54.332 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:54.332 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:54.332 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:54.332 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:54.332 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:54.332 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:54.332 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:54.332 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:54.332 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:54.332 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:54.332 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:54.332 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:54.590 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:54.590 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:54.590 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 
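get_referral_ips and get_discovery_entries both reduce the JSON discovery log that nvme-cli returns, differing only in the jq filter applied to .records[]. A standalone sketch of the same filtering (log.json is just an illustrative file name; the hostnqn/hostid values come from the NVME_HOSTNQN/NVME_HOSTID variables that common.sh generates):

    # Fetch the whole discovery log page from the target as JSON:
    nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -a 10.0.0.2 -s 8009 -o json > log.json
    # Referral addresses only, i.e. every record except the server itself:
    jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' log.json | sort
    # Records of one subtype, e.g. the subsystem referral added above:
    jq '.records[] | select(.subtype == "nvme subsystem")' log.json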
00:12:54.590 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:54.590 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:54.590 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:54.590 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:54.590 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:54.590 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.590 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:54.590 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.590 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:54.590 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.590 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:54.590 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:54.590 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.848 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:54.848 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:54.848 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:54.848 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:54.848 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:54.848 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:54.848 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:54.848 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:54.848 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:54.848 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:54.848 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:54.848 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:54.848 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 
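nvmftestfini, traced below, tears the fixture down in roughly the reverse order of setup. A condensed sketch of that sequence under stated assumptions: `ip netns delete` stands in for the harness's _remove_spdk_ns helper, and the set +e retry loop around module removal is elided:

    modprobe -v -r nvme-tcp            # may need retries while connections drain
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                    # stop the nvmf_tgt reactor process
    ip netns delete cvl_0_0_ns_spdk    # assumption: what _remove_spdk_ns amounts to
    ip -4 addr flush cvl_0_1           # drop the initiator-side test address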
00:12:54.848 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:54.848 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:12:54.848 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:54.848 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:54.848 rmmod nvme_tcp 00:12:54.848 rmmod nvme_fabrics 00:12:54.848 rmmod nvme_keyring 00:12:54.849 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:54.849 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:12:54.849 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:12:54.849 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2067898 ']' 00:12:54.849 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2067898 00:12:54.849 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 2067898 ']' 00:12:54.849 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 2067898 00:12:54.849 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:12:54.849 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:54.849 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2067898 00:12:54.849 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:54.849 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:54.849 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2067898' 00:12:54.849 killing process with pid 2067898 00:12:54.849 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 2067898 00:12:54.849 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 2067898 00:12:55.416 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:55.416 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:55.416 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:55.416 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:55.416 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:55.416 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.416 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:55.416 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.321 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:57.321 00:12:57.321 real 0m7.209s 00:12:57.321 user 0m9.542s 00:12:57.321 sys 0m2.675s 00:12:57.321 11:20:52 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:57.321 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:57.321 ************************************ 00:12:57.321 END TEST nvmf_referrals 00:12:57.321 ************************************ 00:12:57.321 11:20:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:57.321 11:20:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:57.321 11:20:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:57.321 11:20:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:57.321 ************************************ 00:12:57.321 START TEST nvmf_connect_disconnect 00:12:57.321 ************************************ 00:12:57.321 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:57.321 * Looking for test storage... 00:12:57.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:57.321 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:57.321 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:57.321 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:57.321 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:57.321 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:57.321 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:57.321 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:57.321 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:57.321 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:57.321 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:57.321 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:57.321 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:57.641 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:57.641 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:57.641 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:57.641 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:57.641 11:20:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:57.641 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:57.641 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:57.641 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:57.641 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:57.641 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:57.641 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.641 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.641 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.641 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:57.641 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.641 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:12:57.641 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:57.641 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:57.641 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:57.641 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:57.641 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:57.641 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:57.641 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:57.641 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:57.641 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:57.641 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:57.641 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:57.641 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:57.641 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:57.641 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:57.641 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:57.641 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:57.641 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.641 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:57.641 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.641 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:57.641 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:57.641 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:12:57.641 11:20:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- 
# set +x 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:00.175 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:00.175 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:00.175 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:00.176 11:20:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:00.176 Found net devices under 0000:84:00.0: cvl_0_0 00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:00.176 Found net devices under 0000:84:00.1: cvl_0_1 00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:00.176 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:00.176 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:13:00.176 00:13:00.176 --- 10.0.0.2 ping statistics --- 00:13:00.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.176 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:00.176 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:00.176 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms
00:13:00.176
00:13:00.176 --- 10.0.0.2 ping statistics ---
00:13:00.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:00.176 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms
00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:00.176 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:00.176 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms
00:13:00.176
00:13:00.176 --- 10.0.0.1 ping statistics ---
00:13:00.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:00.176 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms
00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0
00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF
00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable
00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2070220
00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2070220
00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 2070220 ']'
00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100
00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:00.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable
00:13:00.176 11:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
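(Editor's note: the nvmf_tcp_init / nvmfappstart sequence just traced is easier to follow condensed. The sketch below replays the same wiring by hand: the target-side port is moved into a private network namespace, both ends get 10.0.0.0/24 addresses, connectivity is verified in both directions, and nvmf_tgt is launched inside the namespace. Interface names cvl_0_0/cvl_0_1, the addresses, the iptables rule, and the nvmf_tgt flags are copied from this run; SPDK_BIN is a hypothetical placeholder for the build directory, and root privileges are assumed.)

    #!/usr/bin/env bash
    # Hand-condensed sketch of the namespace wiring performed by nvmf/common.sh above.
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0                    # start from clean interfaces
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"                          # private namespace for the target side
    ip link set cvl_0_0 netns "$NS"             # target NIC moves into the namespace...
    ip addr add 10.0.0.1/24 dev cvl_0_1         # ...initiator NIC stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                          # verify root ns -> namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1      # verify namespace -> root ns
    # Launch the target inside the namespace, with the flags used in this run:
    ip netns exec "$NS" "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &

Because both physical ports of the same NIC are looped back to each other here, the namespace is what forces traffic onto the wire instead of the local stack; the log resumes below with the target's startup output.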
00:13:00.176 [2024-07-26 11:20:55.730088] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization...
00:13:00.176 [2024-07-26 11:20:55.730263] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:00.176 EAL: No free 2048 kB hugepages reported on node 1
00:13:00.434 [2024-07-26 11:20:55.836422] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:13:00.434 [2024-07-26 11:20:55.961668] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:00.434 [2024-07-26 11:20:55.961736] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:00.434 [2024-07-26 11:20:55.961761] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:00.434 [2024-07-26 11:20:55.961782] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:00.434 [2024-07-26 11:20:55.961801] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:00.434 [2024-07-26 11:20:55.961899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:13:00.434 [2024-07-26 11:20:55.961955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:13:00.434 [2024-07-26 11:20:55.962014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:13:00.434 [2024-07-26 11:20:55.962023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:13:00.693 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:13:00.693 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0
00:13:00.693 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:13:00.693 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable
00:13:00.693 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:13:00.693 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:00.693 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
00:13:00.693 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:00.693 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:13:00.693 [2024-07-26 11:20:56.140238] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:13:00.693 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:00.693 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512
00:13:00.693 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:00.693 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:13:00.693 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:00.693 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0
00:13:00.693 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:13:00.693 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:00.693 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:13:00.693 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:00.693 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:13:00.693 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:00.693 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:13:00.693 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:00.693 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:00.693 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:00.693 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:13:00.693 [2024-07-26 11:20:56.202701] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:00.693 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:00.693 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']'
00:13:00.693 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5
00:13:00.693 11:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x
00:13:03.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:06.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:09.064 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:11.589 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:14.868 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:14.868 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT
00:13:14.868 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini
00:13:14.868 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup
00:13:14.869 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync
00:13:14.869 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:13:14.869 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e
00:13:14.869 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20}
00:13:14.869 11:21:09
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:14.869 rmmod nvme_tcp 00:13:14.869 rmmod nvme_fabrics 00:13:14.869 rmmod nvme_keyring 00:13:14.869 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:14.869 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:13:14.869 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:13:14.869 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2070220 ']' 00:13:14.869 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2070220 00:13:14.869 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 2070220 ']' 00:13:14.869 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 2070220 00:13:14.869 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:13:14.869 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:14.869 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2070220 00:13:14.869 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:14.869 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:14.869 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2070220' 00:13:14.869 killing process with pid 2070220 00:13:14.869 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 2070220 00:13:14.869 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 2070220 00:13:14.869 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:14.869 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:14.869 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:14.869 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:14.869 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:14.869 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.869 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:14.869 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:16.773 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:16.773 00:13:16.773 real 0m19.383s 00:13:16.773 user 0m56.699s 00:13:16.773 sys 0m3.718s 00:13:16.773 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:16.773 11:21:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:16.773 ************************************ 00:13:16.773 END TEST nvmf_connect_disconnect 00:13:16.773 ************************************ 00:13:16.773 11:21:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:16.773 11:21:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:16.773 11:21:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:16.773 11:21:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:16.773 ************************************ 00:13:16.773 START TEST nvmf_multitarget 00:13:16.773 ************************************ 00:13:16.773 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:16.773 * Looking for test storage... 00:13:16.773 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:16.773 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:16.773 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:16.773 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:16.773 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:16.773 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:16.773 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:16.773 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:16.773 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:16.773 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:16.773 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:16.773 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:17.033 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:17.033 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:17.033 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:17.033 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:17.033 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:17.033 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:17.033 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:17.034 11:21:12 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:17.034 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:17.034 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:17.034 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:17.034 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.034 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.034 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.034 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:17.034 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.034 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:13:17.034 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:17.034 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:17.034 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:17.034 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:17.034 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:17.034 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:17.034 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:17.034 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:17.034 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:17.034 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:17.034 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:17.034 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:17.034 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:17.034 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:17.034 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:17.034 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.034 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:17.034 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.034 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:17.034 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:17.034 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:13:17.034 11:21:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:19.570 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:19.570 11:21:15 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:19.570 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:19.570 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:19.571 Found net devices under 0000:84:00.0: cvl_0_0 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:19.571 Found net devices under 0000:84:00.1: cvl_0_1 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:19.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:19.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:13:19.571 00:13:19.571 --- 10.0.0.2 ping statistics --- 00:13:19.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.571 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:19.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:19.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:13:19.571 00:13:19.571 --- 10.0.0.1 ping statistics --- 00:13:19.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.571 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2074593 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2074593 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 2074593 ']' 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
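(Editor's note: waitforlisten, invoked here for pid 2074593 exactly as it was for the previous test, blocks until the freshly started nvmf_tgt answers on /var/tmp/spdk.sock. Below is a rough bash approximation of that poll loop, not the literal helper from common/autotest_common.sh, which differs in detail. The 100-retry budget matches the max_retries=100 visible in the trace; SPDK_DIR is a hypothetical placeholder, and using rpc.py's generic rpc_get_methods call as the readiness probe is this sketch's assumption.)

    # Sketch only: wait until an SPDK app both stays alive and serves its RPC socket.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # app died before listening
            # rpc_get_methods succeeds once the RPC server accepts connections
            if "$SPDK_DIR/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1                                      # never came up within the budget
    }

The Unix-domain socket lives on the shared filesystem, so the probe works from the root namespace even though the target runs inside cvl_0_0_ns_spdk.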
00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:19.571 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:19.829 [2024-07-26 11:21:15.306810] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:13:19.829 [2024-07-26 11:21:15.306897] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:19.829 EAL: No free 2048 kB hugepages reported on node 1 00:13:19.829 [2024-07-26 11:21:15.412182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:20.087 [2024-07-26 11:21:15.540466] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:20.087 [2024-07-26 11:21:15.540536] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:20.087 [2024-07-26 11:21:15.540571] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:20.087 [2024-07-26 11:21:15.540594] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:20.087 [2024-07-26 11:21:15.540614] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:20.087 [2024-07-26 11:21:15.540691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.087 [2024-07-26 11:21:15.540754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:20.087 [2024-07-26 11:21:15.540809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:20.087 [2024-07-26 11:21:15.540817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.087 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:20.088 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:13:20.088 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:20.088 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:20.088 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:20.088 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:20.088 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:20.088 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:20.088 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:20.345 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:20.345 11:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:20.603 "nvmf_tgt_1" 00:13:20.603 11:21:16 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:20.861 "nvmf_tgt_2" 00:13:20.861 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:20.861 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:20.861 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:20.861 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:21.119 true 00:13:21.119 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:21.119 true 00:13:21.119 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:21.119 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:21.377 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:21.377 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:21.377 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:21.377 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:21.377 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:13:21.377 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:21.377 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:13:21.377 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:21.377 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:21.377 rmmod nvme_tcp 00:13:21.377 rmmod nvme_fabrics 00:13:21.377 rmmod nvme_keyring 00:13:21.377 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:21.377 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:13:21.377 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:13:21.377 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2074593 ']' 00:13:21.378 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2074593 00:13:21.378 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 2074593 ']' 00:13:21.378 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 2074593 00:13:21.378 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:13:21.378 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
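(Editor's note: the nvmf_multitarget body traced above is scattered across the xtrace; condensed, it is just the create/verify/delete cycle below. multitarget_rpc.py, the target names, the -s 32 size argument, and the jq length checks are exactly the ones this run used; folding the '[' N '!=' N ']' guards into bracket tests is this sketch's simplification.)

    # Condensed sketch of the nvmf_multitarget test body traced above.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists
    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32        # add two extra targets...
    $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]   # ...now three in total
    $rpc_py nvmf_delete_target -n nvmf_tgt_1              # tear them down again
    $rpc_py nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default

Each create prints the new target's name ("nvmf_tgt_1", "nvmf_tgt_2") and each delete prints true, which is what the bare output lines in the trace above are; the log now continues with the test's teardown.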
00:13:21.378 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2074593 00:13:21.378 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:21.378 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:21.378 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2074593' 00:13:21.378 killing process with pid 2074593 00:13:21.378 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 2074593 00:13:21.378 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 2074593 00:13:21.944 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:21.944 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:21.944 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:21.944 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:21.944 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:21.944 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.944 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:21.944 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.848 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:23.848 00:13:23.848 real 0m7.014s 00:13:23.848 user 0m9.002s 00:13:23.848 sys 0m2.610s 00:13:23.848 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:23.848 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:23.848 ************************************ 00:13:23.848 END TEST nvmf_multitarget 00:13:23.848 ************************************ 00:13:23.848 11:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:23.848 11:21:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:23.848 11:21:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:23.848 11:21:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:23.848 ************************************ 00:13:23.848 START TEST nvmf_rpc 00:13:23.848 ************************************ 00:13:23.848 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:23.848 * Looking for test storage... 
00:13:23.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:23.848 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:23.848 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:23.848 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:23.848 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:23.848 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:23.848 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:23.848 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:23.848 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:23.848 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:23.848 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:23.848 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:23.848 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:23.848 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:23.848 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:24.106 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:24.106 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:24.106 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:24.106 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:24.106 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:24.106 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:24.107 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:24.107 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:24.107 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.107 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.107 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.107 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:24.107 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.107 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:13:24.107 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:24.107 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:24.107 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:24.107 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:24.107 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:24.107 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:24.107 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:24.107 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:24.107 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:24.107 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:24.107 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:24.107 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:24.107 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:24.107 11:21:19 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:24.107 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:24.107 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.107 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:24.107 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.107 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:24.107 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:24.107 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:13:24.107 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:26.641 11:21:22 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:26.641 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:26.641 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:26.641 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:26.642 
11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:26.642 Found net devices under 0000:84:00.0: cvl_0_0 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:26.642 Found net devices under 0000:84:00.1: cvl_0_1 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:26.642 11:21:22 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:26.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:26.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:13:26.642 00:13:26.642 --- 10.0.0.2 ping statistics --- 00:13:26.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.642 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:26.642 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:26.642 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:13:26.642 00:13:26.642 --- 10.0.0.1 ping statistics --- 00:13:26.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.642 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2076831 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:26.642 11:21:22 
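Condensed, the bring-up traced above splits the two detected e810 ports across namespaces: cvl_0_0 becomes the target side inside cvl_0_0_ns_spdk, cvl_0_1 stays in the root namespace as the initiator, and TCP/4420 is opened between them. Replaying just those commands:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # NVMe/TCP port
ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator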
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2076831 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 2076831 ']' 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:26.642 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.901 [2024-07-26 11:21:22.312205] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:13:26.901 [2024-07-26 11:21:22.312303] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:26.901 EAL: No free 2048 kB hugepages reported on node 1 00:13:26.901 [2024-07-26 11:21:22.390276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:26.901 [2024-07-26 11:21:22.519198] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:26.901 [2024-07-26 11:21:22.519256] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:26.901 [2024-07-26 11:21:22.519283] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:26.901 [2024-07-26 11:21:22.519315] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:26.901 [2024-07-26 11:21:22.519334] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
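nvmfappstart then launches the target inside that namespace and waitforlisten blocks until the app answers on its RPC socket. A minimal equivalent, assuming the suite's default /var/tmp/spdk.sock (the polling loop is a sketch; the real helper also verifies the PID is still alive):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
for ((i = 0; i < 100; i++)); do          # max_retries=100, as in the trace
    [[ -S /var/tmp/spdk.sock ]] && break
    sleep 0.5
done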
00:13:26.901 [2024-07-26 11:21:22.519402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.901 [2024-07-26 11:21:22.519462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:26.901 [2024-07-26 11:21:22.519494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:26.901 [2024-07-26 11:21:22.519501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.159 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:27.159 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:13:27.159 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:27.159 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:27.159 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.159 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:27.159 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:27.159 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.159 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.159 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.159 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:27.159 "tick_rate": 2700000000, 00:13:27.159 "poll_groups": [ 00:13:27.159 { 00:13:27.159 "name": "nvmf_tgt_poll_group_000", 00:13:27.159 "admin_qpairs": 0, 00:13:27.159 "io_qpairs": 0, 00:13:27.159 "current_admin_qpairs": 0, 00:13:27.159 "current_io_qpairs": 0, 00:13:27.159 "pending_bdev_io": 0, 00:13:27.159 "completed_nvme_io": 0, 00:13:27.159 "transports": [] 00:13:27.159 }, 00:13:27.159 { 00:13:27.159 "name": "nvmf_tgt_poll_group_001", 00:13:27.159 "admin_qpairs": 0, 00:13:27.159 "io_qpairs": 0, 00:13:27.159 "current_admin_qpairs": 0, 00:13:27.159 "current_io_qpairs": 0, 00:13:27.159 "pending_bdev_io": 0, 00:13:27.159 "completed_nvme_io": 0, 00:13:27.159 "transports": [] 00:13:27.159 }, 00:13:27.159 { 00:13:27.159 "name": "nvmf_tgt_poll_group_002", 00:13:27.159 "admin_qpairs": 0, 00:13:27.159 "io_qpairs": 0, 00:13:27.159 "current_admin_qpairs": 0, 00:13:27.159 "current_io_qpairs": 0, 00:13:27.159 "pending_bdev_io": 0, 00:13:27.159 "completed_nvme_io": 0, 00:13:27.159 "transports": [] 00:13:27.159 }, 00:13:27.159 { 00:13:27.159 "name": "nvmf_tgt_poll_group_003", 00:13:27.159 "admin_qpairs": 0, 00:13:27.159 "io_qpairs": 0, 00:13:27.159 "current_admin_qpairs": 0, 00:13:27.159 "current_io_qpairs": 0, 00:13:27.159 "pending_bdev_io": 0, 00:13:27.159 "completed_nvme_io": 0, 00:13:27.159 "transports": [] 00:13:27.159 } 00:13:27.159 ] 00:13:27.159 }' 00:13:27.159 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:27.159 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:27.159 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:27.159 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:27.159 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
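The first nvmf_get_stats above reports four poll groups with empty transport lists: one group per reactor implied by the 0xF core mask, which is what the (( 4 == 4 )) check verifies. The same count can be taken directly with jq, assuming SPDK's scripts/rpc.py is on hand:

scripts/rpc.py nvmf_get_stats | jq '.poll_groups | length'   # expect 4 for -m 0xF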
00:13:27.159 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:27.420 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:27.420 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:27.420 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.420 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.420 [2024-07-26 11:21:22.825777] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:27.420 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.420 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:27.420 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.420 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.420 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.420 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:27.420 "tick_rate": 2700000000, 00:13:27.420 "poll_groups": [ 00:13:27.420 { 00:13:27.420 "name": "nvmf_tgt_poll_group_000", 00:13:27.420 "admin_qpairs": 0, 00:13:27.420 "io_qpairs": 0, 00:13:27.420 "current_admin_qpairs": 0, 00:13:27.420 "current_io_qpairs": 0, 00:13:27.420 "pending_bdev_io": 0, 00:13:27.420 "completed_nvme_io": 0, 00:13:27.420 "transports": [ 00:13:27.420 { 00:13:27.420 "trtype": "TCP" 00:13:27.420 } 00:13:27.420 ] 00:13:27.420 }, 00:13:27.420 { 00:13:27.420 "name": "nvmf_tgt_poll_group_001", 00:13:27.420 "admin_qpairs": 0, 00:13:27.420 "io_qpairs": 0, 00:13:27.420 "current_admin_qpairs": 0, 00:13:27.420 "current_io_qpairs": 0, 00:13:27.420 "pending_bdev_io": 0, 00:13:27.420 "completed_nvme_io": 0, 00:13:27.420 "transports": [ 00:13:27.420 { 00:13:27.420 "trtype": "TCP" 00:13:27.420 } 00:13:27.420 ] 00:13:27.420 }, 00:13:27.420 { 00:13:27.420 "name": "nvmf_tgt_poll_group_002", 00:13:27.420 "admin_qpairs": 0, 00:13:27.420 "io_qpairs": 0, 00:13:27.420 "current_admin_qpairs": 0, 00:13:27.420 "current_io_qpairs": 0, 00:13:27.420 "pending_bdev_io": 0, 00:13:27.420 "completed_nvme_io": 0, 00:13:27.420 "transports": [ 00:13:27.420 { 00:13:27.420 "trtype": "TCP" 00:13:27.420 } 00:13:27.420 ] 00:13:27.420 }, 00:13:27.420 { 00:13:27.420 "name": "nvmf_tgt_poll_group_003", 00:13:27.420 "admin_qpairs": 0, 00:13:27.420 "io_qpairs": 0, 00:13:27.420 "current_admin_qpairs": 0, 00:13:27.420 "current_io_qpairs": 0, 00:13:27.420 "pending_bdev_io": 0, 00:13:27.420 "completed_nvme_io": 0, 00:13:27.420 "transports": [ 00:13:27.420 { 00:13:27.420 "trtype": "TCP" 00:13:27.420 } 00:13:27.420 ] 00:13:27.420 } 00:13:27.420 ] 00:13:27.420 }' 00:13:27.420 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:27.420 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:27.420 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:27.420 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:27.420 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:27.420 11:21:22 
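The jcount and jsum checks traced above reduce to jq piped into wc and awk. Reconstructed from the trace (function bodies inferred, and $stats assumed to hold the JSON captured from rpc_cmd nvmf_get_stats):

stats=$(scripts/rpc.py nvmf_get_stats)

jcount() {    # count the values a jq filter selects
    local filter=$1
    jq "$filter" <<< "$stats" | wc -l
}

jsum() {      # sum the numeric values a jq filter selects
    local filter=$1
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}

(( $(jcount '.poll_groups[].name') == 4 ))         # one poll group per core
(( $(jsum '.poll_groups[].admin_qpairs') == 0 ))   # no qpairs connected yet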
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:27.420 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:27.420 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:27.420 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:27.420 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:27.420 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:27.420 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:27.420 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:27.420 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:27.420 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.420 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.420 Malloc1 00:13:27.420 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.420 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:27.420 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.420 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.420 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.420 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:27.420 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.420 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.420 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.420 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:27.420 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.420 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.420 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.420 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:27.420 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.420 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.420 [2024-07-26 11:21:23.033098] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:27.420 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.420 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:13:27.420 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:13:27.420 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:13:27.420 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:13:27.420 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:27.420 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:13:27.420 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:27.420 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:13:27.420 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:27.420 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:13:27.420 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:13:27.420 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:13:27.420 [2024-07-26 11:21:23.055707] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:13:27.713 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:27.713 could not add new controller: failed to write to nvme-fabrics device 00:13:27.713 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:13:27.713 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:27.713 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:27.713 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:27.713 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:27.713 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.713 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.713 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.713 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 
--hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:28.279 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:28.279 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:28.279 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:28.279 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:28.279 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:30.178 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:30.178 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:30.178 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:30.178 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:30.178 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:30.178 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:30.178 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:30.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.178 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:30.178 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:30.178 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:30.178 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:30.436 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:30.436 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:30.436 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:30.436 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:30.436 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.436 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.436 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.436 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:30.436 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:13:30.436 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:30.436 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:13:30.436 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:30.436 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:13:30.436 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:30.436 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:13:30.436 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:30.436 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:13:30.436 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:13:30.436 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:30.436 [2024-07-26 11:21:25.875781] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:13:30.436 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:30.436 could not add new controller: failed to write to nvme-fabrics device 00:13:30.436 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:13:30.436 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:30.436 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:30.436 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:30.436 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:30.436 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.436 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.436 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.436 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:31.001 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:31.001 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:31.001 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:31.001 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:31.001 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 
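The two rejected connects above are the point of this passage: with allow_any_host disabled, nvmf_qpair_access_allowed refuses any hostnqn missing from the subsystem's allow list, and the connect succeeds only after nvmf_subsystem_add_host (or after re-enabling allow_any_host). Condensed, with rpc_cmd spelled out as scripts/rpc.py and the host UUID NQN shortened to $HOSTNQN (--hostid omitted for brevity):

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02

scripts/rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn="$HOSTNQN"                         # rejected: host not in allow list

scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn="$HOSTNQN"                         # admitted via the allow list

scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
scripts/rpc.py nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn="$HOSTNQN"                         # admitted: any host allowed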
00:13:32.899 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:32.899 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:32.899 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:32.899 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:32.899 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:32.899 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:32.899 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:33.157 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.157 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:33.157 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:33.157 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:33.157 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:33.157 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:33.157 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:33.157 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:33.157 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:33.157 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.157 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.157 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.157 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:33.157 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:33.157 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:33.157 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.157 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.157 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.157 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:33.157 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.157 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.157 [2024-07-26 11:21:28.672627] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:33.157 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.157 
11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:33.157 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.157 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.157 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.157 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:33.157 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.157 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.157 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.157 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:33.723 11:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:33.723 11:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:33.723 11:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:33.723 11:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:33.723 11:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:36.250 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:36.251 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:36.251 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:36.251 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:36.251 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:36.251 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:36.251 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:36.251 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.251 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:36.251 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:36.251 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:36.251 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:36.251 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:36.251 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:36.251 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 
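From here the suite repeats the build-up/tear-down cycle loops=5 times (rpc.sh@81), each pass removing namespace 5 and deleting the subsystem at the end. A condensed sketch of one pass, again with rpc_cmd spelled out as scripts/rpc.py:

for i in $(seq 1 5); do
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # nsid 5
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN"
    # waitforserial SPDKISFASTANDAWESOME: poll until the namespace shows up in lsblk
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done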
00:13:36.251 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:36.251 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.251 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.251 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.251 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:36.251 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.251 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.251 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.251 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:36.251 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:36.251 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.251 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.251 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.251 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:36.251 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.251 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.251 [2024-07-26 11:21:31.464030] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.251 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.251 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:36.251 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.251 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.251 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.251 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:36.251 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.251 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.251 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.251 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:36.509 11:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:36.509 11:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1198 -- # local i=0 00:13:36.509 11:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:36.509 11:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:36.509 11:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:39.043 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:39.043 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:39.043 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:39.043 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:39.043 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:39.043 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:39.043 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:39.043 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.043 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:39.043 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:39.043 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:39.043 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:39.043 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:39.043 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:39.043 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:39.043 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:39.043 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.043 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.043 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.043 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:39.043 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.043 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.043 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.043 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:39.043 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:39.043 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.043 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:39.043 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.043 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:39.043 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.043 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.043 [2024-07-26 11:21:34.324487] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:39.043 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.043 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:39.043 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.043 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.043 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.043 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:39.043 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.043 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.043 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.043 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:39.608 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:39.608 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:39.608 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:39.608 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:39.608 11:21:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:41.502 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:41.502 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:41.502 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:41.502 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:41.502 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:41.502 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:41.502 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:41.502 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.502 11:21:37 
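Every connect/disconnect in these loops synchronizes through waitforserial and waitforserial_disconnect, which simply poll lsblk for the subsystem serial. Reconstructed from the traced commands (retry bounds as shown in the trace; exact bodies and the retry interval are inferred):

waitforserial() {
    local serial=$1 i=0
    local nvme_device_counter=1 nvme_devices=0
    sleep 2                                     # give the controller time to appear
    while (( i++ <= 15 )); do
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
        sleep 1                                 # retry interval (inferred)
    done
    return 1
}

waitforserial_disconnect() {
    local serial=$1 i=0
    while lsblk -o NAME,SERIAL | grep -q -w "$serial" \
       || lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        (( i++ > 15 )) && return 1
        sleep 1                                 # retry interval (inferred)
    done
    return 0
}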
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:41.502 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:41.502 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:41.502 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:41.502 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:41.502 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:41.502 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:41.502 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:41.502 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.502 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.502 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.502 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:41.502 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.502 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.502 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.502 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:41.502 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:41.502 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.502 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.502 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.502 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:41.502 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.502 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.502 [2024-07-26 11:21:37.146385] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:41.502 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.502 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:41.502 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.502 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.502 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.502 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:41.502 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.502 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.759 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.759 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:42.323 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:42.323 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:42.323 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:42.323 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:42.323 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:44.219 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:44.219 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:44.219 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:44.219 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:44.219 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:44.219 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:44.219 11:21:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:44.477 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.477 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:44.477 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:44.477 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:44.477 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:44.477 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:44.477 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:44.477 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:44.477 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:44.477 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.477 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.477 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.477 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:44.477 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.477 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.477 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.477 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:44.477 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:44.477 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.477 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.477 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.477 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:44.477 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.477 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.477 [2024-07-26 11:21:40.061578] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:44.477 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.477 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:44.477 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.477 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.477 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.477 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:44.477 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.477 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.477 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.477 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:45.409 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:45.409 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:45.409 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:45.409 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:45.409 11:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:47.370 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:47.370 11:21:42 
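
The waitforserial gate above is what keeps each iteration honest: after every nvme connect, the test polls until the namespace actually surfaces as a block device with the expected serial before the disconnect is allowed to run. A condensed sketch of the helper as traced from autotest_common.sh (xtrace bookkeeping dropped; the real function also takes an optional expected-device count, seen as nvme_device_counter in the trace):

    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=${2:-1} nvme_devices=0
        while (( i++ <= 15 )); do
            sleep 2
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
        done
        return 1
    }
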
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:47.370 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:47.370 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:47.370 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:47.370 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:47.370 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:47.370 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.370 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:47.370 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:47.370 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:47.370 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:47.370 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:47.370 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:47.370 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.371 11:21:42 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.371 [2024-07-26 11:21:42.883674] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.371 [2024-07-26 11:21:42.931701] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.371 [2024-07-26 11:21:42.979861] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.371 11:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.371 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.371 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:47.371 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.371 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.371 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.371 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:47.371 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:47.371 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.371 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.371 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.371 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:47.371 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.371 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.371 [2024-07-26 11:21:43.028022] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.629 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.629 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:47.629 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.629 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.629 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.629 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:47.629 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.629 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.629 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.629 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.629 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.629 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.629 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.629 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:47.629 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.629 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.629 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.629 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:47.629 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:47.629 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.629 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.629 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.629 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:47.629 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.629 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.629 [2024-07-26 11:21:43.076210] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.629 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.629 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:47.629 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.629 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.629 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.629 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:47.629 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.629 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.629 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.629 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.629 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.629 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.629 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.630 11:21:43 
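
The five passes of the target/rpc.sh@99 loop above are pure control-plane churn: no host connects, and each pass builds a subsystem up and tears it straight back down. With rpc_cmd expanded to its underlying scripts/rpc.py invocation, one iteration reduces to:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for i in $(seq 1 5); do
        $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # nsid defaults to 1
        $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done
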
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:47.630 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.630 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.630 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.630 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:47.630 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.630 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.630 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.630 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:47.630 "tick_rate": 2700000000, 00:13:47.630 "poll_groups": [ 00:13:47.630 { 00:13:47.630 "name": "nvmf_tgt_poll_group_000", 00:13:47.630 "admin_qpairs": 2, 00:13:47.630 "io_qpairs": 84, 00:13:47.630 "current_admin_qpairs": 0, 00:13:47.630 "current_io_qpairs": 0, 00:13:47.630 "pending_bdev_io": 0, 00:13:47.630 "completed_nvme_io": 183, 00:13:47.630 "transports": [ 00:13:47.630 { 00:13:47.630 "trtype": "TCP" 00:13:47.630 } 00:13:47.630 ] 00:13:47.630 }, 00:13:47.630 { 00:13:47.630 "name": "nvmf_tgt_poll_group_001", 00:13:47.630 "admin_qpairs": 2, 00:13:47.630 "io_qpairs": 84, 00:13:47.630 "current_admin_qpairs": 0, 00:13:47.630 "current_io_qpairs": 0, 00:13:47.630 "pending_bdev_io": 0, 00:13:47.630 "completed_nvme_io": 151, 00:13:47.630 "transports": [ 00:13:47.630 { 00:13:47.630 "trtype": "TCP" 00:13:47.630 } 00:13:47.630 ] 00:13:47.630 }, 00:13:47.630 { 00:13:47.630 "name": "nvmf_tgt_poll_group_002", 00:13:47.630 "admin_qpairs": 1, 00:13:47.630 "io_qpairs": 84, 00:13:47.630 "current_admin_qpairs": 0, 00:13:47.630 "current_io_qpairs": 0, 00:13:47.630 "pending_bdev_io": 0, 00:13:47.630 "completed_nvme_io": 185, 00:13:47.630 "transports": [ 00:13:47.630 { 00:13:47.630 "trtype": "TCP" 00:13:47.630 } 00:13:47.630 ] 00:13:47.630 }, 00:13:47.630 { 00:13:47.630 "name": "nvmf_tgt_poll_group_003", 00:13:47.630 "admin_qpairs": 2, 00:13:47.630 "io_qpairs": 84, 00:13:47.630 "current_admin_qpairs": 0, 00:13:47.630 "current_io_qpairs": 0, 00:13:47.630 "pending_bdev_io": 0, 00:13:47.630 "completed_nvme_io": 167, 00:13:47.630 "transports": [ 00:13:47.630 { 00:13:47.630 "trtype": "TCP" 00:13:47.630 } 00:13:47.630 ] 00:13:47.630 } 00:13:47.630 ] 00:13:47.630 }' 00:13:47.630 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:47.630 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:47.630 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:47.630 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:47.630 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:47.630 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:47.630 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:47.630 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq 
'.poll_groups[].io_qpairs' 00:13:47.630 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:47.630 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:13:47.630 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:47.630 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:47.630 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:47.630 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:47.630 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:13:47.630 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:47.630 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:13:47.630 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:47.630 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:47.630 rmmod nvme_tcp 00:13:47.630 rmmod nvme_fabrics 00:13:47.899 rmmod nvme_keyring 00:13:47.899 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:47.899 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:13:47.899 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:13:47.899 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2076831 ']' 00:13:47.899 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2076831 00:13:47.899 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 2076831 ']' 00:13:47.899 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 2076831 00:13:47.899 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:13:47.899 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:47.899 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2076831 00:13:47.899 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:47.899 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:47.899 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2076831' 00:13:47.899 killing process with pid 2076831 00:13:47.899 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 2076831 00:13:47.899 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 2076831 00:13:48.163 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:48.163 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:48.163 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:48.163 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:48.163 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:48.163 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
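
The assertions on the nvmf_get_stats dump go through the jsum helper from target/rpc.sh, which sums one numeric field across every poll group: with four poll groups at 84 io_qpairs each, '.poll_groups[].io_qpairs' sums to the 336 checked above, and admin_qpairs sums to 2+2+1+2 = 7. As traced, the helper is just jq piped into awk (how $stats is fed in is not visible in the trace; a here-string is assumed below):

    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }
    # stats=$($rpc nvmf_get_stats); (( $(jsum '.poll_groups[].io_qpairs') > 0 ))
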
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.163 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:48.163 11:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.697 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:50.697 00:13:50.697 real 0m26.318s 00:13:50.697 user 1m24.310s 00:13:50.697 sys 0m4.392s 00:13:50.697 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:50.697 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.697 ************************************ 00:13:50.697 END TEST nvmf_rpc 00:13:50.697 ************************************ 00:13:50.697 11:21:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:50.697 11:21:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:50.697 11:21:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:50.697 11:21:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:50.697 ************************************ 00:13:50.697 START TEST nvmf_invalid 00:13:50.697 ************************************ 00:13:50.697 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:50.697 * Looking for test storage... 00:13:50.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:50.697 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:50.697 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:50.697 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:50.697 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:50.697 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:50.697 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:50.697 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:50.697 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:50.697 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:50.697 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:50.697 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:50.697 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:50.697 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:50.697 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:50.697 11:21:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:50.697 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:50.697 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:50.697 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:50.697 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:50.697 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:50.697 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:50.697 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:50.698 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.698 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.698 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.698 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:50.698 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.698 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:13:50.698 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:50.698 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:50.698 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:50.698 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:50.698 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:50.698 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:50.698 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:50.698 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:50.698 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:50.698 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:50.698 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:50.698 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:50.698 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:50.698 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:50.698 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:50.698 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:50.698 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:50.698 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:50.698 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:50.698 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.698 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:50.698 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.698 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:50.698 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:50.698 11:21:45 
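
nvmf/common.sh derives the host identity once per test and threads it through every later connect; the trace shows nvme gen-hostnqn producing the NQN and a matching host ID. Roughly (the exact NVME_HOSTID derivation is not shown in the trace; stripping everything up to "uuid:" is an assumption):

    NVME_HOSTNQN=$(nvme gen-hostnqn)       # nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # cd6acfbe-4794-e311-a299-001e67a97b02
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # later: nvme connect "${NVME_HOST[@]}" -t tcp -n <subnqn> -a 10.0.0.2 -s 4420
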
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:13:50.698 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:53.232 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:53.232 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:13:53.232 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:53.232 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:53.232 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:53.232 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:53.232 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:53.232 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:13:53.232 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:53.232 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:13:53.232 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:13:53.232 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:13:53.232 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:13:53.232 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:13:53.232 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:13:53.232 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:53.232 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:53.232 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:53.232 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:53.232 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:53.232 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:53.232 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:53.232 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:53.232 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:53.232 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:53.232 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:53.232 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:53.232 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 
]] 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:53.233 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:53.233 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:53.233 Found net devices under 0000:84:00.0: cvl_0_0 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.233 11:21:48 
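
The discovery pass above finds the two supported e810 ports (0000:84:00.0 and 0000:84:00.1) and maps each PCI address to its kernel net device through sysfs. The glob-and-strip steps visible in the trace amount to:

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep just the device name
        net_devs+=("${pci_net_devs[@]}")                   # -> cvl_0_0, cvl_0_1
    done
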
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:53.233 Found net devices under 0000:84:00.1: cvl_0_1 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:53.233 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:53.233 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:13:53.233 00:13:53.233 --- 10.0.0.2 ping statistics --- 00:13:53.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.233 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:53.233 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:53.233 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:13:53.233 00:13:53.233 --- 10.0.0.1 ping statistics --- 00:13:53.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.233 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2081452 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2081452 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 2081452 ']' 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.233 11:21:48 
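
nvmf_tcp_init splits the two ports across a network namespace so the target (10.0.0.2 on cvl_0_0) and the initiator (10.0.0.1 on cvl_0_1) talk over a real link rather than loopback. The wiring, in the order it ran above:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # plus the reverse ping from inside the namespace
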
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:53.233 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:53.233 [2024-07-26 11:21:48.587769] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:13:53.233 [2024-07-26 11:21:48.587876] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:53.233 EAL: No free 2048 kB hugepages reported on node 1 00:13:53.233 [2024-07-26 11:21:48.670504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:53.233 [2024-07-26 11:21:48.794804] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:53.233 [2024-07-26 11:21:48.794863] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:53.233 [2024-07-26 11:21:48.794890] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:53.233 [2024-07-26 11:21:48.794911] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:53.233 [2024-07-26 11:21:48.794930] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
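
With the namespace plumbed and both pings passing, nvmfappstart launches the target inside it (command from nvmf/common.sh@480 above) and waitforlisten polls /var/tmp/spdk.sock until the app answers RPCs; pid 2081452 becomes nvmfpid for the final teardown:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF   # shm id 0, all tracepoint groups, core mask 0-3

The -m 0xF mask is why four reactors start just below, and why the earlier nvmf_get_stats dump reported four poll groups.
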
00:13:53.233 [2024-07-26 11:21:48.795008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:53.234 [2024-07-26 11:21:48.795067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:53.234 [2024-07-26 11:21:48.795128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.234 [2024-07-26 11:21:48.795120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:53.492 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:53.492 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:13:53.492 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:53.492 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:53.492 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:53.492 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:53.492 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:53.492 11:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode19794 00:13:53.750 [2024-07-26 11:21:49.224822] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:53.750 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:53.750 { 00:13:53.750 "nqn": "nqn.2016-06.io.spdk:cnode19794", 00:13:53.750 "tgt_name": "foobar", 00:13:53.750 "method": "nvmf_create_subsystem", 00:13:53.750 "req_id": 1 00:13:53.750 } 00:13:53.750 Got JSON-RPC error response 00:13:53.750 response: 00:13:53.750 { 00:13:53.750 "code": -32603, 00:13:53.750 "message": "Unable to find target foobar" 00:13:53.750 }' 00:13:53.750 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:53.750 { 00:13:53.750 "nqn": "nqn.2016-06.io.spdk:cnode19794", 00:13:53.750 "tgt_name": "foobar", 00:13:53.750 "method": "nvmf_create_subsystem", 00:13:53.750 "req_id": 1 00:13:53.750 } 00:13:53.750 Got JSON-RPC error response 00:13:53.750 response: 00:13:53.750 { 00:13:53.750 "code": -32603, 00:13:53.750 "message": "Unable to find target foobar" 00:13:53.750 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:53.750 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:53.750 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode15681 00:13:54.315 [2024-07-26 11:21:49.734571] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15681: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:54.315 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:54.315 { 00:13:54.315 "nqn": "nqn.2016-06.io.spdk:cnode15681", 00:13:54.315 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:54.315 "method": "nvmf_create_subsystem", 00:13:54.315 "req_id": 1 00:13:54.316 } 00:13:54.316 Got JSON-RPC error 
00:13:54.316 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f'
00:13:54.316 11:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode23046
00:13:54.574 [2024-07-26 11:21:50.188163] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23046: invalid model number 'SPDK_Controller'
00:13:54.574 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request:
00:13:54.574 {
00:13:54.574 "nqn": "nqn.2016-06.io.spdk:cnode23046",
00:13:54.574 "model_number": "SPDK_Controller\u001f",
00:13:54.574 "method": "nvmf_create_subsystem",
00:13:54.574 "req_id": 1
00:13:54.574 }
00:13:54.574 Got JSON-RPC error response
00:13:54.574 response:
00:13:54.574 {
00:13:54.574 "code": -32602,
00:13:54.574 "message": "Invalid MN SPDK_Controller\u001f"
00:13:54.574 }'
00:13:54.574 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request:
00:13:54.574 {
00:13:54.574 "nqn": "nqn.2016-06.io.spdk:cnode23046",
00:13:54.574 "model_number": "SPDK_Controller\u001f",
00:13:54.574 "method": "nvmf_create_subsystem",
00:13:54.574 "req_id": 1
00:13:54.574 }
00:13:54.574 Got JSON-RPC error response
00:13:54.574 response:
00:13:54.574 {
00:13:54.574 "code": -32602,
00:13:54.574 "message": "Invalid MN SPDK_Controller\u001f"
00:13:54.574 } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:13:54.574 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21
00:13:54.574 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll
00:13:54.574 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:13:54.574 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:13:54.574 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:13:54.574 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:13:54.574 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:13:54.574 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid --
target/invalid.sh@25 -- # printf %x 36 00:13:54.574 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:54.574 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:54.574 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.574 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.574 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:54.574 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:54.574 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:54.574 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.574 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.574 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:54.574 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:54.574 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:54.574 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.574 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.574 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:54.574 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:54.574 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:54.574 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.574 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.832 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:54.832 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:54.832 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:54.832 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.832 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.832 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:54.832 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:54.832 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:54.832 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.832 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.832 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:54.832 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:54.832 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:54.832 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.832 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:13:54.832 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:13:54.832 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:54.832 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:54.832 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.832 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.832 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:54.832 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:54.832 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:54.832 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.832 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@25 -- # string+=y 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ $ == \- ]] 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '$neQu&FtUuNx=LDy1f7`y' 00:13:54.833 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '$neQu&FtUuNx=LDy1f7`y' nqn.2016-06.io.spdk:cnode27040 00:13:55.092 [2024-07-26 11:21:50.601469] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27040: invalid serial number '$neQu&FtUuNx=LDy1f7`y' 00:13:55.092 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:55.092 { 00:13:55.092 "nqn": "nqn.2016-06.io.spdk:cnode27040", 00:13:55.092 "serial_number": "$neQu&FtUuNx=LDy1f7`y", 00:13:55.092 "method": "nvmf_create_subsystem", 00:13:55.092 "req_id": 1 00:13:55.092 } 00:13:55.092 Got JSON-RPC error response 00:13:55.092 response: 00:13:55.092 { 00:13:55.092 "code": -32602, 00:13:55.092 "message": "Invalid SN $neQu&FtUuNx=LDy1f7`y" 00:13:55.092 }' 00:13:55.092 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:55.092 { 00:13:55.092 "nqn": "nqn.2016-06.io.spdk:cnode27040", 00:13:55.092 "serial_number": "$neQu&FtUuNx=LDy1f7`y", 00:13:55.092 "method": "nvmf_create_subsystem", 00:13:55.092 "req_id": 1 00:13:55.092 } 00:13:55.092 Got JSON-RPC error response 00:13:55.092 response: 00:13:55.092 { 00:13:55.092 "code": -32602, 00:13:55.092 "message": "Invalid SN $neQu&FtUuNx=LDy1f7`y" 00:13:55.092 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:55.092 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:55.092 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:55.092 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:55.092 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:55.092 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:55.092 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:55.092 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.092 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:55.092 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:55.092 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:55.092 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:13:55.092 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.092 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:55.092 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:55.092 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:55.092 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.092 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.092 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:55.092 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:55.092 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:55.092 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:55.093 11:21:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:55.093 11:21:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.093 
11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.093 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:55.094 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:55.094 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:55.094 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.094 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.094 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:55.094 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:55.094 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:55.094 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.094 
11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.094 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:55.094 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:55.094 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:55.094 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.094 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.094 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:55.094 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:55.094 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:55.094 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.094 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.094 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:55.094 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:55.094 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:13:55.094 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.094 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.094 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:55.094 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:55.094 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:55.094 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.094 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x6e'
00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n
00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ = == \- ]]
00:13:55.352 11:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '=~]Iu88pWv_F2D2_!g;d`daY;> /dev/null'
00:13:59.992 11:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:02.526 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:14:02.526
00:14:02.526 real 0m11.782s
00:14:02.526 user 0m32.643s
00:14:02.526 sys 0m3.189s
00:14:02.526 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable
00:14:02.526 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:14:02.526 ************************************
00:14:02.526 END TEST nvmf_invalid
00:14:02.526 ************************************
00:14:02.526 11:21:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:14:02.526 11:21:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:14:02.526 11:21:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:14:02.526 11:21:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:14:02.526 ************************************
00:14:02.526 START TEST nvmf_connect_stress
00:14:02.526 ************************************
00:14:02.526 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
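The bulk of the nvmf_invalid trace above is the script's gen_random_s helper: it draws characters from an array of ASCII codes 32 through 127, converting each pick with `printf %x` and emitting the byte with `echo -e '\xNN'`, and the `[[ = == \- ]]` test checks that the first character is not '-' so the result cannot be mistaken for a command-line option; the finished string is then fed to nvmf_create_subsystem as a deliberately malformed serial or model number. A condensed sketch of the same loop (paraphrased from the trace, not the script verbatim):

    gen_random_s() {
        local length=$1 ll string=
        for ((ll = 0; ll < length; ll++)); do
            # pick an ASCII code in 32..127 and append the matching byte
            string+=$(echo -e "\\x$(printf %x $((RANDOM % 96 + 32)))")
        done
        echo "$string"
    }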
00:14:02.526 * Looking for test storage...
00:14:02.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:14:02.526 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:14:02.526 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s
00:14:02.526 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:14:02.526 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:14:02.526 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:14:02.526 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:14:02.526 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:14:02.526 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:14:02.526 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:14:02.526 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:14:02.526 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:14:02.526 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:14:02.526 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:14:02.526 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
00:14:02.526 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:14:02.526 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:14:02.526 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:14:02.526 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:14:02.526 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:14:02.526 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:02.526 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:02.526 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:02.527 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- #
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.527 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.527 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.527 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:02.527 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.527 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:14:02.527 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:02.527 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:02.527 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:02.527 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:02.527 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:02.527 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:14:02.527 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:02.527 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:02.527 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:02.527 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:02.527 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:02.527 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:02.527 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:02.527 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:02.527 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.527 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:02.527 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.527 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:02.527 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:02.527 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:02.527 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.086 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:05.086 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:05.086 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:05.086 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:05.086 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:05.086 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:05.086 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:05.086 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:05.086 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:05.086 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:14:05.086 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:05.086 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:14:05.086 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:05.086 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:05.086 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:14:05.086 11:22:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:05.086 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:05.086 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:05.086 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:05.086 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:05.086 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:05.086 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:05.086 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:05.086 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:05.087 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:05.087 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]]
00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0'
00:14:05.087 Found net devices under 0000:84:00.0: cvl_0_0
00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]]
00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1'
00:14:05.087 Found net devices under 0000:84:00.1: cvl_0_1
00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes
00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]]
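The discovery pass that just completed boils down to two steps: match each port's PCI ID against the e810/x722/mlx allow-lists built earlier (both ports here report 0x8086:0x159b, an E810-family ID bound to the ice driver), then resolve the PCI address to its kernel netdev through sysfs, which is what the `pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` glob at nvmf/common.sh@383 does. The same lookup in isolation (illustrative; the address is the one from this run):

    pci=0000:84:00.0
    # every entry under the device's net/ directory is a netdev bound to that port
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $path ]] && echo "PCI $pci -> netdev ${path##*/}"
    done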
yes ]] 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:05.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:05.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:14:05.087 00:14:05.087 --- 10.0.0.2 ping statistics --- 00:14:05.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.087 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:05.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:05.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:14:05.087 00:14:05.087 --- 10.0.0.1 ping statistics --- 00:14:05.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.087 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:14:05.087 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:05.088 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:05.088 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:05.088 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:05.088 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:05.088 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:05.088 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:05.088 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:05.088 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:05.088 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:05.088 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.088 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2084370 00:14:05.088 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:05.088 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2084370 00:14:05.088 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 2084370 ']' 00:14:05.088 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.088 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:05.088 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.088 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:05.088 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.088 [2024-07-26 11:22:00.542856] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
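The nvmf_tcp_init sequence traced above amounts to a two-port loopback topology: one E810 port (cvl_0_0) is moved into a private network namespace to host the target, its link partner (cvl_0_1) stays in the root namespace as the initiator, and the two pings confirm the path in both directions before any NVMe traffic flows. A minimal sketch of the same bring-up, using the interface names and addresses from the trace:

    ip netns add cvl_0_0_ns_spdk                  # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # open the NVMe/TCP port toward the initiator interface
    ping -c 1 10.0.0.2                                              # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target -> initiator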
00:14:05.088 [2024-07-26 11:22:00.542961] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:05.088 EAL: No free 2048 kB hugepages reported on node 1 00:14:05.088 [2024-07-26 11:22:00.632025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:05.356 [2024-07-26 11:22:00.776506] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:05.356 [2024-07-26 11:22:00.776569] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:05.356 [2024-07-26 11:22:00.776590] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:05.356 [2024-07-26 11:22:00.776606] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:05.356 [2024-07-26 11:22:00.776629] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:05.356 [2024-07-26 11:22:00.778485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:05.356 [2024-07-26 11:22:00.778532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:05.356 [2024-07-26 11:22:00.778537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:05.356 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:05.356 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:14:05.356 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:05.356 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:05.356 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.356 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:05.356 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:05.356 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.356 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.356 [2024-07-26 11:22:00.948201] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:05.356 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.356 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:05.356 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.356 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.356 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.356 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:14:05.356 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.356 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.356 [2024-07-26 11:22:00.984285] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:05.356 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.356 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:05.356 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.356 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.356 NULL1 00:14:05.356 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.356 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2084513 00:14:05.356 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:05.356 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:05.356 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:05.356 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:05.356 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:05.356 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:05.356 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:05.356 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:05.356 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:05.356 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:05.356 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:05.356 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:05.356 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:05.356 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:05.614 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:05.614 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:05.614 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:05.614 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:14:05.614 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:05.614 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:05.614 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:05.614 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:05.614 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:05.614 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:05.614 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:05.614 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:05.614 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:05.614 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:05.614 EAL: No free 2048 kB hugepages reported on node 1 00:14:05.614 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:05.614 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:05.614 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:05.614 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:05.614 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:05.614 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:05.614 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:05.614 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:05.614 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:05.614 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:05.614 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:05.614 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:05.614 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:05.614 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:05.614 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:05.614 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:05.614 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2084513 00:14:05.614 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.614 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.614 11:22:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.872 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.872 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2084513 00:14:05.872 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.872 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.872 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:06.130 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.130 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2084513 00:14:06.130 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.130 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.130 11:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:06.388 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.388 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2084513 00:14:06.388 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.388 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.388 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:06.953 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.953 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2084513 00:14:06.953 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.953 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.953 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.210 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.210 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2084513 00:14:07.210 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:07.210 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.210 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.517 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.517 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2084513 00:14:07.517 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:07.517 11:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.517 11:22:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.775 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.775 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2084513 00:14:07.775 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:07.775 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.775 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.032 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.032 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2084513 00:14:08.032 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.032 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.032 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.289 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.547 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2084513 00:14:08.547 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.547 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.547 11:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.805 11:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.805 11:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2084513 00:14:08.805 11:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.805 11:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.805 11:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.062 11:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.063 11:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2084513 00:14:09.063 11:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.063 11:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.063 11:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.320 11:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.320 11:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2084513 00:14:09.320 11:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.320 11:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.320 11:22:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.578 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.578 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2084513 00:14:09.578 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.578 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.578 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.144 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.144 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2084513 00:14:10.144 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.144 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.144 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.401 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.401 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2084513 00:14:10.402 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.402 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.402 11:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.659 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.659 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2084513 00:14:10.659 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.659 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.659 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.918 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.918 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2084513 00:14:10.918 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.918 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.918 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.485 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.485 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2084513 00:14:11.485 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.485 11:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.485 11:22:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.743 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.743 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2084513 00:14:11.743 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.743 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.743 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.000 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.000 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2084513 00:14:12.000 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.000 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.000 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.258 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.258 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2084513 00:14:12.258 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.258 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.258 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.517 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.517 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2084513 00:14:12.517 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.517 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.517 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.082 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.082 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2084513 00:14:13.082 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.082 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.082 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.340 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.340 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2084513 00:14:13.340 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.340 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.340 11:22:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.597 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.597 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2084513 00:14:13.597 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.597 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.597 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.855 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.855 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2084513 00:14:13.855 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.855 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.855 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.113 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.113 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2084513 00:14:14.113 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.113 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.113 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.680 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.680 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2084513 00:14:14.680 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.680 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.680 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.937 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.937 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2084513 00:14:14.937 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.937 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.937 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.195 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.195 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2084513 00:14:15.195 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.195 11:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.195 11:22:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.453 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.453 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2084513 00:14:15.453 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.453 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.453 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.453 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:15.711 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.711 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2084513 00:14:15.711 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2084513) - No such process 00:14:15.711 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2084513 00:14:15.711 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:15.711 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:15.711 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:15.711 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:15.711 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:14:15.711 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:15.711 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:14:15.711 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:15.711 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:15.711 rmmod nvme_tcp 00:14:15.711 rmmod nvme_fabrics 00:14:15.969 rmmod nvme_keyring 00:14:15.969 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:15.969 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:14:15.969 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:14:15.969 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2084370 ']' 00:14:15.969 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2084370 00:14:15.969 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 2084370 ']' 00:14:15.969 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 2084370 00:14:15.969 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:14:15.969 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:15.969 11:22:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2084370 00:14:15.969 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:15.969 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:15.969 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2084370' 00:14:15.969 killing process with pid 2084370 00:14:15.969 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 2084370 00:14:15.969 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 2084370 00:14:16.229 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:16.229 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:16.229 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:16.229 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:16.229 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:16.229 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:16.229 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:16.229 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:18.157 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:18.157 00:14:18.157 real 0m16.175s 00:14:18.157 user 0m38.626s 00:14:18.157 sys 0m6.731s 00:14:18.157 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:18.157 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.157 ************************************ 00:14:18.157 END TEST nvmf_connect_stress 00:14:18.157 ************************************ 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:18.416 ************************************ 00:14:18.416 START TEST nvmf_fused_ordering 00:14:18.416 ************************************ 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:18.416 * Looking for test storage... 
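Both target tests in this log share one skeleton: start nvmf_tgt inside the cvl_0_0_ns_spdk namespace, configure it over JSON-RPC, drive traffic at 10.0.0.2:4420, then tear everything down. The connect_stress case that just ended stood its target up with four RPCs; a rough equivalent using scripts/rpc.py against the default /var/tmp/spdk.sock socket (a sketch mirroring the rpc_cmd calls in the trace, with the -o transport option left exactly as nvmf/common.sh assembles it for tcp):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192             # TCP transport, 8 KiB io-unit-size
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                                 # allow any host, up to 10 namespaces
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                                     # listen on the namespaced port
    scripts/rpc.py bdev_null_create NULL1 1000 512                     # 1000 MB null bdev, 512 B blocks

The repeated 'kill -0 2084513' / rpc_cmd pairs above are the test's watchdog loop: kill -0 delivers no signal and only checks that the stress generator is still alive, while the interleaved rpc_cmd calls keep the target's RPC server exercised. The closing 'kill: (2084513) - No such process' simply means the generator, launched with -t 10, exited on schedule.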
00:14:18.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:14:18.416 11:22:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:20.947 11:22:16 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:20.947 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:20.947 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
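The scan above works entirely from a PCI ID cache: gather_supported_nvmf_pci_devs collects the vendor:device pairs for each NIC family (0x8086:0x159b, matched twice here, lands in the e810 list), and each matching function is then resolved to its kernel net device through sysfs in the loop that follows. A condensed sketch of that resolution step for one function from the trace:

    pci=0000:84:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # glob the netdev(s) bound to this function
    pci_net_devs=("${pci_net_devs[@]##*/}")             # strip the sysfs path, leaving e.g. cvl_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"

The [[ up == up ]] checks in the trace are the companion filter: interfaces whose operstate is not up are skipped rather than added to net_devs.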
00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:20.947 Found net devices under 0000:84:00.0: cvl_0_0 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:20.947 Found net devices under 0000:84:00.1: cvl_0_1 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:20.947 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:21.206 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:21.207 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:21.207 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:21.207 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:21.207 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:21.207 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:21.207 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:21.207 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:21.207 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:14:21.207 00:14:21.207 --- 10.0.0.2 ping statistics --- 00:14:21.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.207 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:14:21.207 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:21.207 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:21.207 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:14:21.207 00:14:21.207 --- 10.0.0.1 ping statistics --- 00:14:21.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.207 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:14:21.207 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:21.207 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:14:21.207 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:21.207 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:21.207 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:21.207 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:21.207 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:21.207 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:21.207 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:21.207 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:21.207 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:21.207 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:21.207 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:21.207 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2087680 00:14:21.207 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:21.207 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2087680 00:14:21.207 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 2087680 ']' 00:14:21.207 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.207 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:21.207 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.207 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:21.207 11:22:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:21.207 [2024-07-26 11:22:16.839650] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
00:14:21.207 [2024-07-26 11:22:16.839758] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.466 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.466 [2024-07-26 11:22:16.922400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.466 [2024-07-26 11:22:17.064422] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:21.466 [2024-07-26 11:22:17.064508] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:21.466 [2024-07-26 11:22:17.064526] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:21.466 [2024-07-26 11:22:17.064547] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:21.466 [2024-07-26 11:22:17.064559] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:21.466 [2024-07-26 11:22:17.064597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.724 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:21.724 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:14:21.724 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:21.724 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:21.724 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:21.724 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:21.724 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:21.724 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.724 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:21.724 [2024-07-26 11:22:17.240714] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:21.724 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.724 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:21.724 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.724 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:21.724 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.724 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:21.724 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.724 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@10 -- # set +x 00:14:21.724 [2024-07-26 11:22:17.256962] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.724 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.724 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:21.724 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.724 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:21.724 NULL1 00:14:21.724 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.724 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:21.724 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.724 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:21.724 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.724 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:21.724 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.724 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:21.724 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.724 11:22:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:21.724 [2024-07-26 11:22:17.305716] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
00:14:21.724 [2024-07-26 11:22:17.305771] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2087821 ] 00:14:21.724 EAL: No free 2048 kB hugepages reported on node 1 00:14:22.289 Attached to nqn.2016-06.io.spdk:cnode1 00:14:22.289 Namespace ID: 1 size: 1GB 00:14:22.289 fused_ordering(0) 00:14:22.289 fused_ordering(1) 00:14:22.289 fused_ordering(2) 00:14:22.289 fused_ordering(3) 00:14:22.289 fused_ordering(4) 00:14:22.289 fused_ordering(5) 00:14:22.289 fused_ordering(6) 00:14:22.289 fused_ordering(7) 00:14:22.289 fused_ordering(8) 00:14:22.289 fused_ordering(9) 00:14:22.290 fused_ordering(10) 00:14:22.290 fused_ordering(11) 00:14:22.290 fused_ordering(12) 00:14:22.290 fused_ordering(13) 00:14:22.290 fused_ordering(14) 00:14:22.290 fused_ordering(15) 00:14:22.290 fused_ordering(16) 00:14:22.290 fused_ordering(17) 00:14:22.290 fused_ordering(18) 00:14:22.290 fused_ordering(19) 00:14:22.290 fused_ordering(20) 00:14:22.290 fused_ordering(21) 00:14:22.290 fused_ordering(22) 00:14:22.290 fused_ordering(23) 00:14:22.290 fused_ordering(24) 00:14:22.290 fused_ordering(25) 00:14:22.290 fused_ordering(26) 00:14:22.290 fused_ordering(27) 00:14:22.290 fused_ordering(28) 00:14:22.290 fused_ordering(29) 00:14:22.290 fused_ordering(30) 00:14:22.290 fused_ordering(31) 00:14:22.290 fused_ordering(32) 00:14:22.290 fused_ordering(33) 00:14:22.290 fused_ordering(34) 00:14:22.290 fused_ordering(35) 00:14:22.290 fused_ordering(36) 00:14:22.290 fused_ordering(37) 00:14:22.290 fused_ordering(38) 00:14:22.290 fused_ordering(39) 00:14:22.290 fused_ordering(40) 00:14:22.290 fused_ordering(41) 00:14:22.290 fused_ordering(42) 00:14:22.290 fused_ordering(43) 00:14:22.290 fused_ordering(44) 00:14:22.290 fused_ordering(45) 00:14:22.290 fused_ordering(46) 00:14:22.290 fused_ordering(47) 00:14:22.290 fused_ordering(48) 00:14:22.290 fused_ordering(49) 00:14:22.290 fused_ordering(50) 00:14:22.290 fused_ordering(51) 00:14:22.290 fused_ordering(52) 00:14:22.290 fused_ordering(53) 00:14:22.290 fused_ordering(54) 00:14:22.290 fused_ordering(55) 00:14:22.290 fused_ordering(56) 00:14:22.290 fused_ordering(57) 00:14:22.290 fused_ordering(58) 00:14:22.290 fused_ordering(59) 00:14:22.290 fused_ordering(60) 00:14:22.290 fused_ordering(61) 00:14:22.290 fused_ordering(62) 00:14:22.290 fused_ordering(63) 00:14:22.290 fused_ordering(64) 00:14:22.290 fused_ordering(65) 00:14:22.290 fused_ordering(66) 00:14:22.290 fused_ordering(67) 00:14:22.290 fused_ordering(68) 00:14:22.290 fused_ordering(69) 00:14:22.290 fused_ordering(70) 00:14:22.290 fused_ordering(71) 00:14:22.290 fused_ordering(72) 00:14:22.290 fused_ordering(73) 00:14:22.290 fused_ordering(74) 00:14:22.290 fused_ordering(75) 00:14:22.290 fused_ordering(76) 00:14:22.290 fused_ordering(77) 00:14:22.290 fused_ordering(78) 00:14:22.290 fused_ordering(79) 00:14:22.290 fused_ordering(80) 00:14:22.290 fused_ordering(81) 00:14:22.290 fused_ordering(82) 00:14:22.290 fused_ordering(83) 00:14:22.290 fused_ordering(84) 00:14:22.290 fused_ordering(85) 00:14:22.290 fused_ordering(86) 00:14:22.290 fused_ordering(87) 00:14:22.290 fused_ordering(88) 00:14:22.290 fused_ordering(89) 00:14:22.290 fused_ordering(90) 00:14:22.290 fused_ordering(91) 00:14:22.290 fused_ordering(92) 00:14:22.290 fused_ordering(93) 00:14:22.290 fused_ordering(94) 00:14:22.290 fused_ordering(95) 00:14:22.290 fused_ordering(96) 
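For reference, the nvmf_tcp_init sequence traced above condenses to the following shell steps. Everything here is copied from the -x trace itself (cvl_0_0 and cvl_0_1 are the two E810 ports this rig enumerated under 0000:84:00.0 and 0000:84:00.1), so treat it as a record of this run rather than a general recipe:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
  ping -c 1 10.0.0.2                                    # initiator to target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # and the reverse direction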
00:14:22.290 fused_ordering(97) 00:14:22.290 fused_ordering(98) 00:14:22.290 fused_ordering(99) 00:14:22.290 fused_ordering(100) 00:14:22.290 fused_ordering(101) 00:14:22.290 fused_ordering(102) 00:14:22.290 fused_ordering(103) 00:14:22.290 fused_ordering(104) 00:14:22.290 fused_ordering(105) 00:14:22.290 fused_ordering(106) 00:14:22.290 fused_ordering(107) 00:14:22.290 fused_ordering(108) 00:14:22.290 fused_ordering(109) 00:14:22.290 fused_ordering(110) 00:14:22.290 fused_ordering(111) 00:14:22.290 fused_ordering(112) 00:14:22.290 fused_ordering(113) 00:14:22.290 fused_ordering(114) 00:14:22.290 fused_ordering(115) 00:14:22.290 fused_ordering(116) 00:14:22.290 fused_ordering(117) 00:14:22.290 fused_ordering(118) 00:14:22.290 fused_ordering(119) 00:14:22.290 fused_ordering(120) 00:14:22.290 fused_ordering(121) 00:14:22.290 fused_ordering(122) 00:14:22.290 fused_ordering(123) 00:14:22.290 fused_ordering(124) 00:14:22.290 fused_ordering(125) 00:14:22.290 fused_ordering(126) 00:14:22.290 fused_ordering(127) 00:14:22.290 fused_ordering(128) 00:14:22.290 fused_ordering(129) 00:14:22.290 fused_ordering(130) 00:14:22.290 fused_ordering(131) 00:14:22.290 fused_ordering(132) 00:14:22.290 fused_ordering(133) 00:14:22.290 fused_ordering(134) 00:14:22.290 fused_ordering(135) 00:14:22.290 fused_ordering(136) 00:14:22.290 fused_ordering(137) 00:14:22.290 fused_ordering(138) 00:14:22.290 fused_ordering(139) 00:14:22.290 fused_ordering(140) 00:14:22.290 fused_ordering(141) 00:14:22.290 fused_ordering(142) 00:14:22.290 fused_ordering(143) 00:14:22.290 fused_ordering(144) 00:14:22.290 fused_ordering(145) 00:14:22.290 fused_ordering(146) 00:14:22.290 fused_ordering(147) 00:14:22.290 fused_ordering(148) 00:14:22.290 fused_ordering(149) 00:14:22.290 fused_ordering(150) 00:14:22.290 fused_ordering(151) 00:14:22.290 fused_ordering(152) 00:14:22.290 fused_ordering(153) 00:14:22.290 fused_ordering(154) 00:14:22.290 fused_ordering(155) 00:14:22.290 fused_ordering(156) 00:14:22.290 fused_ordering(157) 00:14:22.290 fused_ordering(158) 00:14:22.290 fused_ordering(159) 00:14:22.290 fused_ordering(160) 00:14:22.290 fused_ordering(161) 00:14:22.290 fused_ordering(162) 00:14:22.290 fused_ordering(163) 00:14:22.290 fused_ordering(164) 00:14:22.290 fused_ordering(165) 00:14:22.290 fused_ordering(166) 00:14:22.290 fused_ordering(167) 00:14:22.290 fused_ordering(168) 00:14:22.290 fused_ordering(169) 00:14:22.290 fused_ordering(170) 00:14:22.290 fused_ordering(171) 00:14:22.290 fused_ordering(172) 00:14:22.290 fused_ordering(173) 00:14:22.290 fused_ordering(174) 00:14:22.290 fused_ordering(175) 00:14:22.290 fused_ordering(176) 00:14:22.290 fused_ordering(177) 00:14:22.290 fused_ordering(178) 00:14:22.290 fused_ordering(179) 00:14:22.290 fused_ordering(180) 00:14:22.290 fused_ordering(181) 00:14:22.290 fused_ordering(182) 00:14:22.290 fused_ordering(183) 00:14:22.290 fused_ordering(184) 00:14:22.290 fused_ordering(185) 00:14:22.290 fused_ordering(186) 00:14:22.290 fused_ordering(187) 00:14:22.290 fused_ordering(188) 00:14:22.290 fused_ordering(189) 00:14:22.290 fused_ordering(190) 00:14:22.290 fused_ordering(191) 00:14:22.290 fused_ordering(192) 00:14:22.290 fused_ordering(193) 00:14:22.290 fused_ordering(194) 00:14:22.290 fused_ordering(195) 00:14:22.290 fused_ordering(196) 00:14:22.290 fused_ordering(197) 00:14:22.290 fused_ordering(198) 00:14:22.290 fused_ordering(199) 00:14:22.290 fused_ordering(200) 00:14:22.290 fused_ordering(201) 00:14:22.290 fused_ordering(202) 00:14:22.290 fused_ordering(203) 00:14:22.290 
fused_ordering(204) 00:14:22.290 fused_ordering(205) 00:14:22.856 fused_ordering(206) 00:14:22.856 fused_ordering(207) 00:14:22.856 fused_ordering(208) 00:14:22.856 fused_ordering(209) 00:14:22.856 fused_ordering(210) 00:14:22.856 fused_ordering(211) 00:14:22.856 fused_ordering(212) 00:14:22.856 fused_ordering(213) 00:14:22.856 fused_ordering(214) 00:14:22.856 fused_ordering(215) 00:14:22.856 fused_ordering(216) 00:14:22.856 fused_ordering(217) 00:14:22.856 fused_ordering(218) 00:14:22.856 fused_ordering(219) 00:14:22.856 fused_ordering(220) 00:14:22.856 fused_ordering(221) 00:14:22.856 fused_ordering(222) 00:14:22.856 fused_ordering(223) 00:14:22.856 fused_ordering(224) 00:14:22.856 fused_ordering(225) 00:14:22.856 fused_ordering(226) 00:14:22.856 fused_ordering(227) 00:14:22.856 fused_ordering(228) 00:14:22.856 fused_ordering(229) 00:14:22.856 fused_ordering(230) 00:14:22.856 fused_ordering(231) 00:14:22.856 fused_ordering(232) 00:14:22.856 fused_ordering(233) 00:14:22.856 fused_ordering(234) 00:14:22.856 fused_ordering(235) 00:14:22.856 fused_ordering(236) 00:14:22.856 fused_ordering(237) 00:14:22.856 fused_ordering(238) 00:14:22.856 fused_ordering(239) 00:14:22.856 fused_ordering(240) 00:14:22.856 fused_ordering(241) 00:14:22.856 fused_ordering(242) 00:14:22.856 fused_ordering(243) 00:14:22.856 fused_ordering(244) 00:14:22.856 fused_ordering(245) 00:14:22.856 fused_ordering(246) 00:14:22.856 fused_ordering(247) 00:14:22.856 fused_ordering(248) 00:14:22.856 fused_ordering(249) 00:14:22.856 fused_ordering(250) 00:14:22.856 fused_ordering(251) 00:14:22.856 fused_ordering(252) 00:14:22.856 fused_ordering(253) 00:14:22.856 fused_ordering(254) 00:14:22.856 fused_ordering(255) 00:14:22.856 fused_ordering(256) 00:14:22.856 fused_ordering(257) 00:14:22.856 fused_ordering(258) 00:14:22.856 fused_ordering(259) 00:14:22.856 fused_ordering(260) 00:14:22.856 fused_ordering(261) 00:14:22.856 fused_ordering(262) 00:14:22.856 fused_ordering(263) 00:14:22.856 fused_ordering(264) 00:14:22.856 fused_ordering(265) 00:14:22.856 fused_ordering(266) 00:14:22.856 fused_ordering(267) 00:14:22.856 fused_ordering(268) 00:14:22.856 fused_ordering(269) 00:14:22.856 fused_ordering(270) 00:14:22.856 fused_ordering(271) 00:14:22.856 fused_ordering(272) 00:14:22.856 fused_ordering(273) 00:14:22.856 fused_ordering(274) 00:14:22.856 fused_ordering(275) 00:14:22.856 fused_ordering(276) 00:14:22.856 fused_ordering(277) 00:14:22.856 fused_ordering(278) 00:14:22.856 fused_ordering(279) 00:14:22.856 fused_ordering(280) 00:14:22.856 fused_ordering(281) 00:14:22.856 fused_ordering(282) 00:14:22.856 fused_ordering(283) 00:14:22.856 fused_ordering(284) 00:14:22.856 fused_ordering(285) 00:14:22.856 fused_ordering(286) 00:14:22.856 fused_ordering(287) 00:14:22.856 fused_ordering(288) 00:14:22.856 fused_ordering(289) 00:14:22.856 fused_ordering(290) 00:14:22.856 fused_ordering(291) 00:14:22.856 fused_ordering(292) 00:14:22.856 fused_ordering(293) 00:14:22.856 fused_ordering(294) 00:14:22.856 fused_ordering(295) 00:14:22.856 fused_ordering(296) 00:14:22.856 fused_ordering(297) 00:14:22.856 fused_ordering(298) 00:14:22.856 fused_ordering(299) 00:14:22.856 fused_ordering(300) 00:14:22.856 fused_ordering(301) 00:14:22.856 fused_ordering(302) 00:14:22.856 fused_ordering(303) 00:14:22.856 fused_ordering(304) 00:14:22.856 fused_ordering(305) 00:14:22.856 fused_ordering(306) 00:14:22.856 fused_ordering(307) 00:14:22.856 fused_ordering(308) 00:14:22.856 fused_ordering(309) 00:14:22.856 fused_ordering(310) 00:14:22.856 fused_ordering(311) 
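The target provisioning that precedes this enumeration (fused_ordering.sh@15 through @20 in the trace) is five RPCs plus a wait. A standalone equivalent using scripts/rpc.py is sketched below; the method names and arguments are verbatim from the rpc_cmd calls above, but running them by hand against the target's default /var/tmp/spdk.sock is an assumption, since the harness wraps them in its own rpc_cmd helper:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                 # transport flags copied verbatim from the trace
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -m 10                                     # allow any host, serial number, max 10 namespaces
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512                         # 1000 MB null bdev, 512-byte blocks
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1  # becomes Namespace ID 1 above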
00:14:22.856 fused_ordering(312) 00:14:22.856 fused_ordering(313) 00:14:22.856 fused_ordering(314) 00:14:22.856 fused_ordering(315) 00:14:22.856 fused_ordering(316) 00:14:22.856 fused_ordering(317) 00:14:22.856 fused_ordering(318) 00:14:22.856 fused_ordering(319) 00:14:22.856 fused_ordering(320) 00:14:22.856 fused_ordering(321) 00:14:22.856 fused_ordering(322) 00:14:22.856 fused_ordering(323) 00:14:22.856 fused_ordering(324) 00:14:22.856 fused_ordering(325) 00:14:22.856 fused_ordering(326) 00:14:22.856 fused_ordering(327) 00:14:22.856 fused_ordering(328) 00:14:22.856 fused_ordering(329) 00:14:22.856 fused_ordering(330) 00:14:22.856 fused_ordering(331) 00:14:22.856 fused_ordering(332) 00:14:22.857 fused_ordering(333) 00:14:22.857 fused_ordering(334) 00:14:22.857 fused_ordering(335) 00:14:22.857 fused_ordering(336) 00:14:22.857 fused_ordering(337) 00:14:22.857 fused_ordering(338) 00:14:22.857 fused_ordering(339) 00:14:22.857 fused_ordering(340) 00:14:22.857 fused_ordering(341) 00:14:22.857 fused_ordering(342) 00:14:22.857 fused_ordering(343) 00:14:22.857 fused_ordering(344) 00:14:22.857 fused_ordering(345) 00:14:22.857 fused_ordering(346) 00:14:22.857 fused_ordering(347) 00:14:22.857 fused_ordering(348) 00:14:22.857 fused_ordering(349) 00:14:22.857 fused_ordering(350) 00:14:22.857 fused_ordering(351) 00:14:22.857 fused_ordering(352) 00:14:22.857 fused_ordering(353) 00:14:22.857 fused_ordering(354) 00:14:22.857 fused_ordering(355) 00:14:22.857 fused_ordering(356) 00:14:22.857 fused_ordering(357) 00:14:22.857 fused_ordering(358) 00:14:22.857 fused_ordering(359) 00:14:22.857 fused_ordering(360) 00:14:22.857 fused_ordering(361) 00:14:22.857 fused_ordering(362) 00:14:22.857 fused_ordering(363) 00:14:22.857 fused_ordering(364) 00:14:22.857 fused_ordering(365) 00:14:22.857 fused_ordering(366) 00:14:22.857 fused_ordering(367) 00:14:22.857 fused_ordering(368) 00:14:22.857 fused_ordering(369) 00:14:22.857 fused_ordering(370) 00:14:22.857 fused_ordering(371) 00:14:22.857 fused_ordering(372) 00:14:22.857 fused_ordering(373) 00:14:22.857 fused_ordering(374) 00:14:22.857 fused_ordering(375) 00:14:22.857 fused_ordering(376) 00:14:22.857 fused_ordering(377) 00:14:22.857 fused_ordering(378) 00:14:22.857 fused_ordering(379) 00:14:22.857 fused_ordering(380) 00:14:22.857 fused_ordering(381) 00:14:22.857 fused_ordering(382) 00:14:22.857 fused_ordering(383) 00:14:22.857 fused_ordering(384) 00:14:22.857 fused_ordering(385) 00:14:22.857 fused_ordering(386) 00:14:22.857 fused_ordering(387) 00:14:22.857 fused_ordering(388) 00:14:22.857 fused_ordering(389) 00:14:22.857 fused_ordering(390) 00:14:22.857 fused_ordering(391) 00:14:22.857 fused_ordering(392) 00:14:22.857 fused_ordering(393) 00:14:22.857 fused_ordering(394) 00:14:22.857 fused_ordering(395) 00:14:22.857 fused_ordering(396) 00:14:22.857 fused_ordering(397) 00:14:22.857 fused_ordering(398) 00:14:22.857 fused_ordering(399) 00:14:22.857 fused_ordering(400) 00:14:22.857 fused_ordering(401) 00:14:22.857 fused_ordering(402) 00:14:22.857 fused_ordering(403) 00:14:22.857 fused_ordering(404) 00:14:22.857 fused_ordering(405) 00:14:22.857 fused_ordering(406) 00:14:22.857 fused_ordering(407) 00:14:22.857 fused_ordering(408) 00:14:22.857 fused_ordering(409) 00:14:22.857 fused_ordering(410) 00:14:23.422 fused_ordering(411) 00:14:23.422 fused_ordering(412) 00:14:23.422 fused_ordering(413) 00:14:23.422 fused_ordering(414) 00:14:23.422 fused_ordering(415) 00:14:23.422 fused_ordering(416) 00:14:23.422 fused_ordering(417) 00:14:23.422 fused_ordering(418) 00:14:23.422 
fused_ordering(419) 00:14:23.422 fused_ordering(420) 00:14:23.422 fused_ordering(421) 00:14:23.422 fused_ordering(422) 00:14:23.422 fused_ordering(423) 00:14:23.422 fused_ordering(424) 00:14:23.422 fused_ordering(425) 00:14:23.422 fused_ordering(426) 00:14:23.422 fused_ordering(427) 00:14:23.422 fused_ordering(428) 00:14:23.422 fused_ordering(429) 00:14:23.422 fused_ordering(430) 00:14:23.422 fused_ordering(431) 00:14:23.422 fused_ordering(432) 00:14:23.422 fused_ordering(433) 00:14:23.422 fused_ordering(434) 00:14:23.422 fused_ordering(435) 00:14:23.422 fused_ordering(436) 00:14:23.422 fused_ordering(437) 00:14:23.422 fused_ordering(438) 00:14:23.422 fused_ordering(439) 00:14:23.422 fused_ordering(440) 00:14:23.422 fused_ordering(441) 00:14:23.422 fused_ordering(442) 00:14:23.422 fused_ordering(443) 00:14:23.422 fused_ordering(444) 00:14:23.422 fused_ordering(445) 00:14:23.422 fused_ordering(446) 00:14:23.422 fused_ordering(447) 00:14:23.422 fused_ordering(448) 00:14:23.422 fused_ordering(449) 00:14:23.422 fused_ordering(450) 00:14:23.422 fused_ordering(451) 00:14:23.422 fused_ordering(452) 00:14:23.422 fused_ordering(453) 00:14:23.422 fused_ordering(454) 00:14:23.422 fused_ordering(455) 00:14:23.422 fused_ordering(456) 00:14:23.422 fused_ordering(457) 00:14:23.422 fused_ordering(458) 00:14:23.422 fused_ordering(459) 00:14:23.422 fused_ordering(460) 00:14:23.422 fused_ordering(461) 00:14:23.422 fused_ordering(462) 00:14:23.422 fused_ordering(463) 00:14:23.422 fused_ordering(464) 00:14:23.422 fused_ordering(465) 00:14:23.422 fused_ordering(466) 00:14:23.422 fused_ordering(467) 00:14:23.422 fused_ordering(468) 00:14:23.422 fused_ordering(469) 00:14:23.422 fused_ordering(470) 00:14:23.422 fused_ordering(471) 00:14:23.422 fused_ordering(472) 00:14:23.422 fused_ordering(473) 00:14:23.422 fused_ordering(474) 00:14:23.422 fused_ordering(475) 00:14:23.422 fused_ordering(476) 00:14:23.422 fused_ordering(477) 00:14:23.422 fused_ordering(478) 00:14:23.422 fused_ordering(479) 00:14:23.422 fused_ordering(480) 00:14:23.422 fused_ordering(481) 00:14:23.422 fused_ordering(482) 00:14:23.422 fused_ordering(483) 00:14:23.422 fused_ordering(484) 00:14:23.422 fused_ordering(485) 00:14:23.422 fused_ordering(486) 00:14:23.422 fused_ordering(487) 00:14:23.422 fused_ordering(488) 00:14:23.422 fused_ordering(489) 00:14:23.422 fused_ordering(490) 00:14:23.422 fused_ordering(491) 00:14:23.422 fused_ordering(492) 00:14:23.423 fused_ordering(493) 00:14:23.423 fused_ordering(494) 00:14:23.423 fused_ordering(495) 00:14:23.423 fused_ordering(496) 00:14:23.423 fused_ordering(497) 00:14:23.423 fused_ordering(498) 00:14:23.423 fused_ordering(499) 00:14:23.423 fused_ordering(500) 00:14:23.423 fused_ordering(501) 00:14:23.423 fused_ordering(502) 00:14:23.423 fused_ordering(503) 00:14:23.423 fused_ordering(504) 00:14:23.423 fused_ordering(505) 00:14:23.423 fused_ordering(506) 00:14:23.423 fused_ordering(507) 00:14:23.423 fused_ordering(508) 00:14:23.423 fused_ordering(509) 00:14:23.423 fused_ordering(510) 00:14:23.423 fused_ordering(511) 00:14:23.423 fused_ordering(512) 00:14:23.423 fused_ordering(513) 00:14:23.423 fused_ordering(514) 00:14:23.423 fused_ordering(515) 00:14:23.423 fused_ordering(516) 00:14:23.423 fused_ordering(517) 00:14:23.423 fused_ordering(518) 00:14:23.423 fused_ordering(519) 00:14:23.423 fused_ordering(520) 00:14:23.423 fused_ordering(521) 00:14:23.423 fused_ordering(522) 00:14:23.423 fused_ordering(523) 00:14:23.423 fused_ordering(524) 00:14:23.423 fused_ordering(525) 00:14:23.423 fused_ordering(526) 
00:14:23.423 fused_ordering(527) 00:14:23.423 fused_ordering(528) 00:14:23.423 fused_ordering(529) 00:14:23.423 fused_ordering(530) 00:14:23.423 fused_ordering(531) 00:14:23.423 fused_ordering(532) 00:14:23.423 fused_ordering(533) 00:14:23.423 fused_ordering(534) 00:14:23.423 fused_ordering(535) 00:14:23.423 fused_ordering(536) 00:14:23.423 fused_ordering(537) 00:14:23.423 fused_ordering(538) 00:14:23.423 fused_ordering(539) 00:14:23.423 fused_ordering(540) 00:14:23.423 fused_ordering(541) 00:14:23.423 fused_ordering(542) 00:14:23.423 fused_ordering(543) 00:14:23.423 fused_ordering(544) 00:14:23.423 fused_ordering(545) 00:14:23.423 fused_ordering(546) 00:14:23.423 fused_ordering(547) 00:14:23.423 fused_ordering(548) 00:14:23.423 fused_ordering(549) 00:14:23.423 fused_ordering(550) 00:14:23.423 fused_ordering(551) 00:14:23.423 fused_ordering(552) 00:14:23.423 fused_ordering(553) 00:14:23.423 fused_ordering(554) 00:14:23.423 fused_ordering(555) 00:14:23.423 fused_ordering(556) 00:14:23.423 fused_ordering(557) 00:14:23.423 fused_ordering(558) 00:14:23.423 fused_ordering(559) 00:14:23.423 fused_ordering(560) 00:14:23.423 fused_ordering(561) 00:14:23.423 fused_ordering(562) 00:14:23.423 fused_ordering(563) 00:14:23.423 fused_ordering(564) 00:14:23.423 fused_ordering(565) 00:14:23.423 fused_ordering(566) 00:14:23.423 fused_ordering(567) 00:14:23.423 fused_ordering(568) 00:14:23.423 fused_ordering(569) 00:14:23.423 fused_ordering(570) 00:14:23.423 fused_ordering(571) 00:14:23.423 fused_ordering(572) 00:14:23.423 fused_ordering(573) 00:14:23.423 fused_ordering(574) 00:14:23.423 fused_ordering(575) 00:14:23.423 fused_ordering(576) 00:14:23.423 fused_ordering(577) 00:14:23.423 fused_ordering(578) 00:14:23.423 fused_ordering(579) 00:14:23.423 fused_ordering(580) 00:14:23.423 fused_ordering(581) 00:14:23.423 fused_ordering(582) 00:14:23.423 fused_ordering(583) 00:14:23.423 fused_ordering(584) 00:14:23.423 fused_ordering(585) 00:14:23.423 fused_ordering(586) 00:14:23.423 fused_ordering(587) 00:14:23.423 fused_ordering(588) 00:14:23.423 fused_ordering(589) 00:14:23.423 fused_ordering(590) 00:14:23.423 fused_ordering(591) 00:14:23.423 fused_ordering(592) 00:14:23.423 fused_ordering(593) 00:14:23.423 fused_ordering(594) 00:14:23.423 fused_ordering(595) 00:14:23.423 fused_ordering(596) 00:14:23.423 fused_ordering(597) 00:14:23.423 fused_ordering(598) 00:14:23.423 fused_ordering(599) 00:14:23.423 fused_ordering(600) 00:14:23.423 fused_ordering(601) 00:14:23.423 fused_ordering(602) 00:14:23.423 fused_ordering(603) 00:14:23.423 fused_ordering(604) 00:14:23.423 fused_ordering(605) 00:14:23.423 fused_ordering(606) 00:14:23.423 fused_ordering(607) 00:14:23.423 fused_ordering(608) 00:14:23.423 fused_ordering(609) 00:14:23.423 fused_ordering(610) 00:14:23.423 fused_ordering(611) 00:14:23.423 fused_ordering(612) 00:14:23.423 fused_ordering(613) 00:14:23.423 fused_ordering(614) 00:14:23.423 fused_ordering(615) 00:14:23.988 fused_ordering(616) 00:14:23.988 fused_ordering(617) 00:14:23.988 fused_ordering(618) 00:14:23.988 fused_ordering(619) 00:14:23.988 fused_ordering(620) 00:14:23.988 fused_ordering(621) 00:14:23.988 fused_ordering(622) 00:14:23.988 fused_ordering(623) 00:14:23.988 fused_ordering(624) 00:14:23.988 fused_ordering(625) 00:14:23.988 fused_ordering(626) 00:14:23.988 fused_ordering(627) 00:14:23.988 fused_ordering(628) 00:14:23.988 fused_ordering(629) 00:14:23.988 fused_ordering(630) 00:14:23.988 fused_ordering(631) 00:14:23.988 fused_ordering(632) 00:14:23.988 fused_ordering(633) 00:14:23.988 
fused_ordering(634) 00:14:23.988 fused_ordering(635) 00:14:23.988 fused_ordering(636) 00:14:23.988 fused_ordering(637) 00:14:23.988 fused_ordering(638) 00:14:23.988 fused_ordering(639) 00:14:23.988 fused_ordering(640) 00:14:23.988 fused_ordering(641) 00:14:23.988 fused_ordering(642) 00:14:23.988 fused_ordering(643) 00:14:23.988 fused_ordering(644) 00:14:23.988 fused_ordering(645) 00:14:23.988 fused_ordering(646) 00:14:23.988 fused_ordering(647) 00:14:23.988 fused_ordering(648) 00:14:23.988 fused_ordering(649) 00:14:23.988 fused_ordering(650) 00:14:23.988 fused_ordering(651) 00:14:23.988 fused_ordering(652) 00:14:23.988 fused_ordering(653) 00:14:23.988 fused_ordering(654) 00:14:23.988 fused_ordering(655) 00:14:23.988 fused_ordering(656) 00:14:23.988 fused_ordering(657) 00:14:23.988 fused_ordering(658) 00:14:23.988 fused_ordering(659) 00:14:23.988 fused_ordering(660) 00:14:23.988 fused_ordering(661) 00:14:23.988 fused_ordering(662) 00:14:23.988 fused_ordering(663) 00:14:23.988 fused_ordering(664) 00:14:23.988 fused_ordering(665) 00:14:23.988 fused_ordering(666) 00:14:23.988 fused_ordering(667) 00:14:23.988 fused_ordering(668) 00:14:23.988 fused_ordering(669) 00:14:23.988 fused_ordering(670) 00:14:23.988 fused_ordering(671) 00:14:23.988 fused_ordering(672) 00:14:23.988 fused_ordering(673) 00:14:23.988 fused_ordering(674) 00:14:23.988 fused_ordering(675) 00:14:23.988 fused_ordering(676) 00:14:23.988 fused_ordering(677) 00:14:23.988 fused_ordering(678) 00:14:23.988 fused_ordering(679) 00:14:23.988 fused_ordering(680) 00:14:23.988 fused_ordering(681) 00:14:23.988 fused_ordering(682) 00:14:23.988 fused_ordering(683) 00:14:23.988 fused_ordering(684) 00:14:23.988 fused_ordering(685) 00:14:23.988 fused_ordering(686) 00:14:23.988 fused_ordering(687) 00:14:23.988 fused_ordering(688) 00:14:23.988 fused_ordering(689) 00:14:23.988 fused_ordering(690) 00:14:23.988 fused_ordering(691) 00:14:23.988 fused_ordering(692) 00:14:23.988 fused_ordering(693) 00:14:23.988 fused_ordering(694) 00:14:23.988 fused_ordering(695) 00:14:23.988 fused_ordering(696) 00:14:23.988 fused_ordering(697) 00:14:23.988 fused_ordering(698) 00:14:23.988 fused_ordering(699) 00:14:23.988 fused_ordering(700) 00:14:23.988 fused_ordering(701) 00:14:23.988 fused_ordering(702) 00:14:23.988 fused_ordering(703) 00:14:23.988 fused_ordering(704) 00:14:23.988 fused_ordering(705) 00:14:23.988 fused_ordering(706) 00:14:23.988 fused_ordering(707) 00:14:23.988 fused_ordering(708) 00:14:23.988 fused_ordering(709) 00:14:23.988 fused_ordering(710) 00:14:23.988 fused_ordering(711) 00:14:23.988 fused_ordering(712) 00:14:23.988 fused_ordering(713) 00:14:23.988 fused_ordering(714) 00:14:23.988 fused_ordering(715) 00:14:23.988 fused_ordering(716) 00:14:23.988 fused_ordering(717) 00:14:23.988 fused_ordering(718) 00:14:23.988 fused_ordering(719) 00:14:23.988 fused_ordering(720) 00:14:23.988 fused_ordering(721) 00:14:23.988 fused_ordering(722) 00:14:23.988 fused_ordering(723) 00:14:23.988 fused_ordering(724) 00:14:23.988 fused_ordering(725) 00:14:23.988 fused_ordering(726) 00:14:23.988 fused_ordering(727) 00:14:23.988 fused_ordering(728) 00:14:23.988 fused_ordering(729) 00:14:23.988 fused_ordering(730) 00:14:23.988 fused_ordering(731) 00:14:23.988 fused_ordering(732) 00:14:23.988 fused_ordering(733) 00:14:23.988 fused_ordering(734) 00:14:23.988 fused_ordering(735) 00:14:23.988 fused_ordering(736) 00:14:23.988 fused_ordering(737) 00:14:23.988 fused_ordering(738) 00:14:23.988 fused_ordering(739) 00:14:23.988 fused_ordering(740) 00:14:23.988 fused_ordering(741) 
00:14:23.988 fused_ordering(742) 00:14:23.988 fused_ordering(743) 00:14:23.988 fused_ordering(744) 00:14:23.988 fused_ordering(745) 00:14:23.988 fused_ordering(746) 00:14:23.988 fused_ordering(747) 00:14:23.988 fused_ordering(748) 00:14:23.988 fused_ordering(749) 00:14:23.988 fused_ordering(750) 00:14:23.988 fused_ordering(751) 00:14:23.988 fused_ordering(752) 00:14:23.988 fused_ordering(753) 00:14:23.988 fused_ordering(754) 00:14:23.988 fused_ordering(755) 00:14:23.988 fused_ordering(756) 00:14:23.988 fused_ordering(757) 00:14:23.988 fused_ordering(758) 00:14:23.988 fused_ordering(759) 00:14:23.989 fused_ordering(760) 00:14:23.989 fused_ordering(761) 00:14:23.989 fused_ordering(762) 00:14:23.989 fused_ordering(763) 00:14:23.989 fused_ordering(764) 00:14:23.989 fused_ordering(765) 00:14:23.989 fused_ordering(766) 00:14:23.989 fused_ordering(767) 00:14:23.989 fused_ordering(768) 00:14:23.989 fused_ordering(769) 00:14:23.989 fused_ordering(770) 00:14:23.989 fused_ordering(771) 00:14:23.989 fused_ordering(772) 00:14:23.989 fused_ordering(773) 00:14:23.989 fused_ordering(774) 00:14:23.989 fused_ordering(775) 00:14:23.989 fused_ordering(776) 00:14:23.989 fused_ordering(777) 00:14:23.989 fused_ordering(778) 00:14:23.989 fused_ordering(779) 00:14:23.989 fused_ordering(780) 00:14:23.989 fused_ordering(781) 00:14:23.989 fused_ordering(782) 00:14:23.989 fused_ordering(783) 00:14:23.989 fused_ordering(784) 00:14:23.989 fused_ordering(785) 00:14:23.989 fused_ordering(786) 00:14:23.989 fused_ordering(787) 00:14:23.989 fused_ordering(788) 00:14:23.989 fused_ordering(789) 00:14:23.989 fused_ordering(790) 00:14:23.989 fused_ordering(791) 00:14:23.989 fused_ordering(792) 00:14:23.989 fused_ordering(793) 00:14:23.989 fused_ordering(794) 00:14:23.989 fused_ordering(795) 00:14:23.989 fused_ordering(796) 00:14:23.989 fused_ordering(797) 00:14:23.989 fused_ordering(798) 00:14:23.989 fused_ordering(799) 00:14:23.989 fused_ordering(800) 00:14:23.989 fused_ordering(801) 00:14:23.989 fused_ordering(802) 00:14:23.989 fused_ordering(803) 00:14:23.989 fused_ordering(804) 00:14:23.989 fused_ordering(805) 00:14:23.989 fused_ordering(806) 00:14:23.989 fused_ordering(807) 00:14:23.989 fused_ordering(808) 00:14:23.989 fused_ordering(809) 00:14:23.989 fused_ordering(810) 00:14:23.989 fused_ordering(811) 00:14:23.989 fused_ordering(812) 00:14:23.989 fused_ordering(813) 00:14:23.989 fused_ordering(814) 00:14:23.989 fused_ordering(815) 00:14:23.989 fused_ordering(816) 00:14:23.989 fused_ordering(817) 00:14:23.989 fused_ordering(818) 00:14:23.989 fused_ordering(819) 00:14:23.989 fused_ordering(820) 00:14:24.923 fused_ordering(821) 00:14:24.923 fused_ordering(822) 00:14:24.923 fused_ordering(823) 00:14:24.923 fused_ordering(824) 00:14:24.923 fused_ordering(825) 00:14:24.923 fused_ordering(826) 00:14:24.923 fused_ordering(827) 00:14:24.923 fused_ordering(828) 00:14:24.923 fused_ordering(829) 00:14:24.923 fused_ordering(830) 00:14:24.923 fused_ordering(831) 00:14:24.923 fused_ordering(832) 00:14:24.923 fused_ordering(833) 00:14:24.923 fused_ordering(834) 00:14:24.923 fused_ordering(835) 00:14:24.923 fused_ordering(836) 00:14:24.923 fused_ordering(837) 00:14:24.923 fused_ordering(838) 00:14:24.923 fused_ordering(839) 00:14:24.923 fused_ordering(840) 00:14:24.923 fused_ordering(841) 00:14:24.923 fused_ordering(842) 00:14:24.923 fused_ordering(843) 00:14:24.923 fused_ordering(844) 00:14:24.923 fused_ordering(845) 00:14:24.923 fused_ordering(846) 00:14:24.923 fused_ordering(847) 00:14:24.923 fused_ordering(848) 00:14:24.923 
fused_ordering(849) 00:14:24.923 fused_ordering(850) 00:14:24.923 fused_ordering(851) 00:14:24.923 fused_ordering(852) 00:14:24.923 fused_ordering(853) 00:14:24.923 fused_ordering(854) 00:14:24.923 fused_ordering(855) 00:14:24.923 fused_ordering(856) 00:14:24.923 fused_ordering(857) 00:14:24.923 fused_ordering(858) 00:14:24.923 fused_ordering(859) 00:14:24.923 fused_ordering(860) 00:14:24.923 fused_ordering(861) 00:14:24.923 fused_ordering(862) 00:14:24.923 fused_ordering(863) 00:14:24.923 fused_ordering(864) 00:14:24.923 fused_ordering(865) 00:14:24.923 fused_ordering(866) 00:14:24.923 fused_ordering(867) 00:14:24.923 fused_ordering(868) 00:14:24.923 fused_ordering(869) 00:14:24.923 fused_ordering(870) 00:14:24.923 fused_ordering(871) 00:14:24.923 fused_ordering(872) 00:14:24.923 fused_ordering(873) 00:14:24.923 fused_ordering(874) 00:14:24.923 fused_ordering(875) 00:14:24.923 fused_ordering(876) 00:14:24.923 fused_ordering(877) 00:14:24.923 fused_ordering(878) 00:14:24.923 fused_ordering(879) 00:14:24.923 fused_ordering(880) 00:14:24.923 fused_ordering(881) 00:14:24.923 fused_ordering(882) 00:14:24.923 fused_ordering(883) 00:14:24.923 fused_ordering(884) 00:14:24.923 fused_ordering(885) 00:14:24.923 fused_ordering(886) 00:14:24.923 fused_ordering(887) 00:14:24.923 fused_ordering(888) 00:14:24.923 fused_ordering(889) 00:14:24.923 fused_ordering(890) 00:14:24.923 fused_ordering(891) 00:14:24.923 fused_ordering(892) 00:14:24.923 fused_ordering(893) 00:14:24.923 fused_ordering(894) 00:14:24.923 fused_ordering(895) 00:14:24.923 fused_ordering(896) 00:14:24.923 fused_ordering(897) 00:14:24.923 fused_ordering(898) 00:14:24.923 fused_ordering(899) 00:14:24.923 fused_ordering(900) 00:14:24.923 fused_ordering(901) 00:14:24.923 fused_ordering(902) 00:14:24.923 fused_ordering(903) 00:14:24.923 fused_ordering(904) 00:14:24.923 fused_ordering(905) 00:14:24.923 fused_ordering(906) 00:14:24.923 fused_ordering(907) 00:14:24.923 fused_ordering(908) 00:14:24.923 fused_ordering(909) 00:14:24.923 fused_ordering(910) 00:14:24.923 fused_ordering(911) 00:14:24.923 fused_ordering(912) 00:14:24.923 fused_ordering(913) 00:14:24.923 fused_ordering(914) 00:14:24.923 fused_ordering(915) 00:14:24.923 fused_ordering(916) 00:14:24.923 fused_ordering(917) 00:14:24.923 fused_ordering(918) 00:14:24.923 fused_ordering(919) 00:14:24.923 fused_ordering(920) 00:14:24.923 fused_ordering(921) 00:14:24.923 fused_ordering(922) 00:14:24.923 fused_ordering(923) 00:14:24.923 fused_ordering(924) 00:14:24.923 fused_ordering(925) 00:14:24.923 fused_ordering(926) 00:14:24.923 fused_ordering(927) 00:14:24.923 fused_ordering(928) 00:14:24.923 fused_ordering(929) 00:14:24.923 fused_ordering(930) 00:14:24.923 fused_ordering(931) 00:14:24.923 fused_ordering(932) 00:14:24.923 fused_ordering(933) 00:14:24.923 fused_ordering(934) 00:14:24.923 fused_ordering(935) 00:14:24.923 fused_ordering(936) 00:14:24.923 fused_ordering(937) 00:14:24.923 fused_ordering(938) 00:14:24.923 fused_ordering(939) 00:14:24.924 fused_ordering(940) 00:14:24.924 fused_ordering(941) 00:14:24.924 fused_ordering(942) 00:14:24.924 fused_ordering(943) 00:14:24.924 fused_ordering(944) 00:14:24.924 fused_ordering(945) 00:14:24.924 fused_ordering(946) 00:14:24.924 fused_ordering(947) 00:14:24.924 fused_ordering(948) 00:14:24.924 fused_ordering(949) 00:14:24.924 fused_ordering(950) 00:14:24.924 fused_ordering(951) 00:14:24.924 fused_ordering(952) 00:14:24.924 fused_ordering(953) 00:14:24.924 fused_ordering(954) 00:14:24.924 fused_ordering(955) 00:14:24.924 fused_ordering(956) 
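The fused_ordering binary connects with the transport ID string shown earlier ('trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1') and prints fused_ordering(N) for N = 0 through 1023 in this run. Once the last batch below finishes, the trace closes with the usual nvmftestfini teardown; in outline (the namespace removal step is an assumption, since _remove_spdk_ns has its output redirected to /dev/null in the trace):

  modprobe -v -r nvme-tcp       # produces the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines below
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"               # the nvmf_tgt launched for this test, pid 2087680 here
  _remove_spdk_ns               # assumed: tears down cvl_0_0_ns_spdk; output suppressed in the trace
  ip -4 addr flush cvl_0_1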
00:14:24.924 fused_ordering(957) 00:14:24.924 fused_ordering(958) 00:14:24.924 fused_ordering(959) 00:14:24.924 fused_ordering(960) 00:14:24.924 fused_ordering(961) 00:14:24.924 fused_ordering(962) 00:14:24.924 fused_ordering(963) 00:14:24.924 fused_ordering(964) 00:14:24.924 fused_ordering(965) 00:14:24.924 fused_ordering(966) 00:14:24.924 fused_ordering(967) 00:14:24.924 fused_ordering(968) 00:14:24.924 fused_ordering(969) 00:14:24.924 fused_ordering(970) 00:14:24.924 fused_ordering(971) 00:14:24.924 fused_ordering(972) 00:14:24.924 fused_ordering(973) 00:14:24.924 fused_ordering(974) 00:14:24.924 fused_ordering(975) 00:14:24.924 fused_ordering(976) 00:14:24.924 fused_ordering(977) 00:14:24.924 fused_ordering(978) 00:14:24.924 fused_ordering(979) 00:14:24.924 fused_ordering(980) 00:14:24.924 fused_ordering(981) 00:14:24.924 fused_ordering(982) 00:14:24.924 fused_ordering(983) 00:14:24.924 fused_ordering(984) 00:14:24.924 fused_ordering(985) 00:14:24.924 fused_ordering(986) 00:14:24.924 fused_ordering(987) 00:14:24.924 fused_ordering(988) 00:14:24.924 fused_ordering(989) 00:14:24.924 fused_ordering(990) 00:14:24.924 fused_ordering(991) 00:14:24.924 fused_ordering(992) 00:14:24.924 fused_ordering(993) 00:14:24.924 fused_ordering(994) 00:14:24.924 fused_ordering(995) 00:14:24.924 fused_ordering(996) 00:14:24.924 fused_ordering(997) 00:14:24.924 fused_ordering(998) 00:14:24.924 fused_ordering(999) 00:14:24.924 fused_ordering(1000) 00:14:24.924 fused_ordering(1001) 00:14:24.924 fused_ordering(1002) 00:14:24.924 fused_ordering(1003) 00:14:24.924 fused_ordering(1004) 00:14:24.924 fused_ordering(1005) 00:14:24.924 fused_ordering(1006) 00:14:24.924 fused_ordering(1007) 00:14:24.924 fused_ordering(1008) 00:14:24.924 fused_ordering(1009) 00:14:24.924 fused_ordering(1010) 00:14:24.924 fused_ordering(1011) 00:14:24.924 fused_ordering(1012) 00:14:24.924 fused_ordering(1013) 00:14:24.924 fused_ordering(1014) 00:14:24.924 fused_ordering(1015) 00:14:24.924 fused_ordering(1016) 00:14:24.924 fused_ordering(1017) 00:14:24.924 fused_ordering(1018) 00:14:24.924 fused_ordering(1019) 00:14:24.924 fused_ordering(1020) 00:14:24.924 fused_ordering(1021) 00:14:24.924 fused_ordering(1022) 00:14:24.924 fused_ordering(1023) 00:14:24.924 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:24.924 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:24.924 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:24.924 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:14:24.924 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:24.924 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:14:24.924 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:24.924 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:24.924 rmmod nvme_tcp 00:14:24.924 rmmod nvme_fabrics 00:14:24.924 rmmod nvme_keyring 00:14:24.924 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:24.924 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:14:24.924 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 
-- # return 0 00:14:24.924 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2087680 ']' 00:14:24.924 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2087680 00:14:24.924 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 2087680 ']' 00:14:24.924 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 2087680 00:14:24.924 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:14:24.924 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:24.924 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2087680 00:14:24.924 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:24.924 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:24.924 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2087680' 00:14:24.924 killing process with pid 2087680 00:14:24.924 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 2087680 00:14:24.924 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 2087680 00:14:25.496 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:25.496 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:25.496 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:25.496 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:25.496 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:25.496 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:25.496 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:25.496 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.399 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:27.399 00:14:27.399 real 0m9.031s 00:14:27.399 user 0m6.036s 00:14:27.399 sys 0m4.521s 00:14:27.399 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:27.399 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:27.399 ************************************ 00:14:27.399 END TEST nvmf_fused_ordering 00:14:27.399 ************************************ 00:14:27.399 11:22:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:27.399 11:22:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:27.399 11:22:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:27.399 11:22:22 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:27.399 ************************************ 00:14:27.399 START TEST nvmf_ns_masking 00:14:27.399 ************************************ 00:14:27.399 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:27.399 * Looking for test storage... 00:14:27.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:27.399 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:27.399 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:27.399 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:27.399 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:27.399 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:27.399 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:27.399 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:27.399 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:27.399 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:27.399 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:27.399 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:27.399 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:27.399 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:27.399 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:27.399 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:27.400 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:27.400 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:27.400 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:27.400 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:27.659 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:27.659 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:27.659 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:27.659 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.659 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.659 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.660 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:27.660 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.660 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:27.660 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:27.660 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:27.660 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:27.660 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:27.660 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:27.660 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:27.660 11:22:23 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:27.660 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:27.660 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:27.660 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:27.660 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:27.660 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:27.660 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=fc950410-20ea-426c-b1ad-7c91b23128f4 00:14:27.660 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:27.660 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=adae33fb-d368-48b3-b4bb-14407f8b10ed 00:14:27.660 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:27.660 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:27.660 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:27.660 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:27.660 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=d293be34-8765-4ac3-8aba-7cac5f628ede 00:14:27.660 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:27.660 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:27.660 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:27.660 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:27.660 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:27.660 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:27.660 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.660 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:27.660 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.660 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:27.660 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:27.660 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:27.660 11:22:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:30.206 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:30.206 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:30.206 Found net devices under 0000:84:00.0: cvl_0_0 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:30.206 Found net devices under 0000:84:00.1: cvl_0_1 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:30.206 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:30.207 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:30.207 11:22:25 
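
The sequence above builds the standard two-port topology for a phy run: one port of the E810 NIC is moved into a private network namespace and becomes the target side (10.0.0.2), while the other stays in the root namespace as the initiator (10.0.0.1). Condensed from the commands logged above, with the same interface names and addresses:

  # Target port lives in its own netns; initiator port stays in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator
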
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:30.207 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:30.207 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:30.207 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:30.207 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:14:30.207 00:14:30.207 --- 10.0.0.2 ping statistics --- 00:14:30.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.207 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:14:30.207 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:30.207 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:30.207 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:14:30.207 00:14:30.207 --- 10.0.0.1 ping statistics --- 00:14:30.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.207 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:14:30.207 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:30.207 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:14:30.207 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:30.207 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:30.207 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:30.207 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:30.207 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:30.207 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:30.207 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:30.207 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:30.207 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:30.207 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:30.207 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:30.207 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2090173 00:14:30.207 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:30.207 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2090173 00:14:30.207 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2090173 ']' 00:14:30.207 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.207 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:30.207 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.207 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:30.207 11:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:30.207 [2024-07-26 11:22:25.866304] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:14:30.207 [2024-07-26 11:22:25.866418] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.468 EAL: No free 2048 kB hugepages reported on node 1 00:14:30.468 [2024-07-26 11:22:25.948834] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.468 [2024-07-26 11:22:26.069616] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:30.468 [2024-07-26 11:22:26.069682] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:30.468 [2024-07-26 11:22:26.069708] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:30.468 [2024-07-26 11:22:26.069728] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:30.468 [2024-07-26 11:22:26.069757] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:30.468 [2024-07-26 11:22:26.069798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.726 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:30.726 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:14:30.726 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:30.726 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:30.726 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:30.726 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:30.726 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:30.983 [2024-07-26 11:22:26.499263] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:30.983 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:30.984 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:30.984 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:31.549 Malloc1 00:14:31.549 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:31.807 Malloc2 00:14:31.807 11:22:27 
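
With networking verified, the target comes up: nvmf_tgt is launched inside the target namespace, the TCP transport is created, and two malloc bdevs are provisioned as namespace backing stores. A condensed sketch of the bring-up just logged (repository paths shortened; flags exactly as in the trace):

  # nvmf_tgt runs inside the target netns so it listens on 10.0.0.2.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # transport opts from nvmftestinit
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1      # 64 MiB, 512-byte blocks
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
  # The subsystem, namespace, and listener RPCs follow below.
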
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:32.066 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:32.631 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:32.631 [2024-07-26 11:22:28.268709] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:32.631 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:32.631 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d293be34-8765-4ac3-8aba-7cac5f628ede -a 10.0.0.2 -s 4420 -i 4 00:14:32.888 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:32.888 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:32.888 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:32.888 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:32.888 11:22:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:35.452 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:35.452 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:35.452 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:35.452 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:35.452 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:35.452 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:35.452 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:35.452 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:35.452 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:35.452 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:35.452 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:35.452 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:35.452 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:35.452 [ 0]:0x1 00:14:35.452 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:14:35.452 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:35.452 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b28d4c6dbd3c46aaaeeae022eedeff26 00:14:35.452 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b28d4c6dbd3c46aaaeeae022eedeff26 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:35.452 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:35.452 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:35.452 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:35.452 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:35.452 [ 0]:0x1 00:14:35.452 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:35.452 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:35.452 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b28d4c6dbd3c46aaaeeae022eedeff26 00:14:35.452 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b28d4c6dbd3c46aaaeeae022eedeff26 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:35.452 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:35.452 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:35.452 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:35.452 [ 1]:0x2 00:14:35.452 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:35.452 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:35.452 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=44ab0b1650d5469ca13bc228011ba688 00:14:35.452 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 44ab0b1650d5469ca13bc228011ba688 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:35.452 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:35.452 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:35.709 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.709 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:35.967 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:36.224 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:36.224 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
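
Namespace 1 has just been re-added with --no-auto-visible, and the host is about to reconnect (the connect below): until a host NQN is explicitly granted via nvmf_ns_add_host, no connected controller lists the namespace, and its Identify Namespace data reads back as zeros. The visibility probe used throughout the test, reconstructed from the commands it logs (a condensed sketch, not the verbatim helper):

  ns_is_visible() {
      # Visible namespaces show up in the active namespace list...
      nvme list-ns /dev/nvme0 | grep "0x$1" || return 1
      # ...and report a non-zero NGUID; masked ones read back all zeros.
      local nguid
      nguid=$(nvme id-ns /dev/nvme0 -n "0x$1" -o json | jq -r .nguid)
      [[ $nguid != "00000000000000000000000000000000" ]]
  }
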
target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d293be34-8765-4ac3-8aba-7cac5f628ede -a 10.0.0.2 -s 4420 -i 4 00:14:36.481 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:36.481 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:36.481 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:36.481 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:14:36.481 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:14:36.481 11:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:38.419 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:38.419 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:38.419 11:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:38.419 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:38.419 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:38.419 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:38.419 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:38.419 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:38.677 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:38.677 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:38.677 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:38.677 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:38.677 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:38.677 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:38.677 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:38.677 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:38.677 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:38.677 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:38.677 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:38.677 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:38.677 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:14:38.677 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:38.677 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:38.677 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:38.677 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:38.677 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:38.677 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:38.677 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:38.677 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:38.677 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:38.677 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:38.677 [ 0]:0x2 00:14:38.677 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:38.677 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:38.677 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=44ab0b1650d5469ca13bc228011ba688 00:14:38.677 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 44ab0b1650d5469ca13bc228011ba688 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:38.677 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:38.934 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:38.934 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:38.934 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:38.934 [ 0]:0x1 00:14:38.934 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:38.934 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:39.192 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b28d4c6dbd3c46aaaeeae022eedeff26 00:14:39.192 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b28d4c6dbd3c46aaaeeae022eedeff26 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:39.192 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:39.192 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:39.192 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:39.192 [ 1]:0x2 00:14:39.193 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 
-o json 00:14:39.193 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:39.193 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=44ab0b1650d5469ca13bc228011ba688 00:14:39.193 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 44ab0b1650d5469ca13bc228011ba688 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:39.193 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:39.451 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:39.451 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:39.451 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:39.451 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:39.451 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:39.451 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:39.451 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:39.451 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:39.451 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:39.451 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:39.451 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:39.451 11:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:39.451 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:39.451 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:39.451 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:39.451 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:39.451 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:39.451 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:39.451 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:39.451 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:39.451 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:39.451 [ 0]:0x2 00:14:39.451 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:39.451 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
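
nvmf_ns_add_host and nvmf_ns_remove_host flip per-host visibility on a live connection: the target notifies the host of the namespace change, so namespace 1 appears after the add and, as the check continuing below shows, vanishes again after the remove, all without reconnecting. The RPC pair as exercised (paths shortened):

  # Grant host1 access to nsid 1, then revoke it again.
  ./scripts/rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  nvme list-ns /dev/nvme0              # nsid 1 now listed alongside the auto-visible nsid 2
  ./scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  nvme list-ns /dev/nvme0              # nsid 1 gone again; only nsid 2 remains
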
target/ns_masking.sh@44 -- # jq -r .nguid 00:14:39.451 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=44ab0b1650d5469ca13bc228011ba688 00:14:39.451 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 44ab0b1650d5469ca13bc228011ba688 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:39.451 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:39.451 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:39.451 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.451 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:40.016 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:40.016 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d293be34-8765-4ac3-8aba-7cac5f628ede -a 10.0.0.2 -s 4420 -i 4 00:14:40.016 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:40.016 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:40.016 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:40.016 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:40.016 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:40.016 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:42.543 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:42.543 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:42.543 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:42.543 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:42.543 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:42.543 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:42.543 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:42.543 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:42.543 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:42.543 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:42.543 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:42.543 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:14:42.543 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:42.543 [ 0]:0x1 00:14:42.543 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:42.543 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:42.543 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b28d4c6dbd3c46aaaeeae022eedeff26 00:14:42.543 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b28d4c6dbd3c46aaaeeae022eedeff26 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:42.543 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:42.543 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:42.543 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:42.543 [ 1]:0x2 00:14:42.543 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:42.543 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:42.543 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=44ab0b1650d5469ca13bc228011ba688 00:14:42.543 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 44ab0b1650d5469ca13bc228011ba688 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:42.543 11:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:42.801 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:42.801 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:42.801 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:42.801 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:42.801 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:42.801 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:42.801 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:42.801 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:42.801 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:42.801 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:42.801 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:43.059 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:43.059 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:43.060 11:22:38 
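
Every negative check in this test runs through the harness's NOT wrapper, which is where the es= bookkeeping in the trace comes from. Reduced to its essence (a minimal sketch; the real helper also validates that its argument is executable):

  # Succeed exactly when the wrapped command fails, so
  # "NOT ns_is_visible 0x1" asserts that namespace 1 is masked.
  NOT() {
      local es=0
      "$@" || es=$?
      (( es != 0 ))
  }
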
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:43.060 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:43.060 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:43.060 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:43.060 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:43.060 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:43.060 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:43.060 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:43.060 [ 0]:0x2 00:14:43.060 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:43.060 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:43.060 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=44ab0b1650d5469ca13bc228011ba688 00:14:43.060 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 44ab0b1650d5469ca13bc228011ba688 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:43.060 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:43.060 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:43.060 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:43.060 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:43.060 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:43.060 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:43.060 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:43.060 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:43.060 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:43.060 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:43.060 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:43.060 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:43.318 [2024-07-26 11:22:38.820879] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:43.318 request: 00:14:43.318 { 00:14:43.318 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.318 "nsid": 2, 00:14:43.318 "host": "nqn.2016-06.io.spdk:host1", 00:14:43.318 "method": "nvmf_ns_remove_host", 00:14:43.318 "req_id": 1 00:14:43.318 } 00:14:43.318 Got JSON-RPC error response 00:14:43.318 response: 00:14:43.318 { 00:14:43.318 "code": -32602, 00:14:43.318 "message": "Invalid parameters" 00:14:43.318 } 00:14:43.318 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:43.318 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:43.318 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:43.318 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:43.318 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:43.318 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:43.318 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:43.318 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:43.318 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:43.318 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:43.318 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:43.318 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:43.318 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:43.318 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:43.318 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:43.318 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:43.318 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:43.318 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:43.318 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:43.318 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:43.318 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:43.318 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:43.318 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
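
The request/response pair above is the deliberate failure case: namespace 2 was added without --no-auto-visible, so the target keeps no per-host allow list for it and the per-host RPC is rejected with JSON-RPC error -32602 (Invalid parameters). Driven by hand it looks like this, with the expected failure caught so a set -e script survives:

  # Expected to fail: visibility RPCs only apply to --no-auto-visible namespaces.
  ./scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 \
      || echo "rejected with -32602, as expected"
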
target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:43.318 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:43.318 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:43.318 [ 0]:0x2 00:14:43.318 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:43.318 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:43.577 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=44ab0b1650d5469ca13bc228011ba688 00:14:43.577 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 44ab0b1650d5469ca13bc228011ba688 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:43.577 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:43.577 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:43.577 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:43.577 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2091849 00:14:43.577 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:43.577 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:43.577 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2091849 /var/tmp/host.sock 00:14:43.577 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2091849 ']' 00:14:43.577 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:14:43.577 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:43.577 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:43.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:43.577 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:43.577 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:43.577 [2024-07-26 11:22:39.144697] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
00:14:43.577 [2024-07-26 11:22:39.144784] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2091849 ] 00:14:43.577 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.577 [2024-07-26 11:22:39.211418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.835 [2024-07-26 11:22:39.332870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:44.093 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:44.093 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:14:44.093 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:44.351 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:44.916 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid fc950410-20ea-426c-b1ad-7c91b23128f4 00:14:44.916 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:44.916 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g FC95041020EA426CB1AD7C91B23128F4 -i 00:14:45.482 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid adae33fb-d368-48b3-b4bb-14407f8b10ed 00:14:45.482 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:45.482 11:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g ADAE33FBD36848B3B4BB14407F8B10ED -i 00:14:45.740 11:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:46.306 11:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:46.564 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:46.564 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:47.129 nvme0n1 00:14:47.129 11:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:47.129 11:22:42 
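
The final phase re-creates both namespaces with explicit NGUIDs (the test UUIDs with the dashes stripped, per uuid2nguid above) and verifies the masking from a second SPDK application acting as the initiator: each attached controller enumerates only the namespace its host NQN was granted, and the bdev_get_bdevs checks that follow confirm the NGUID set with -g round-trips as the bdev UUID. Condensed, paths shortened, with the results from the trace as comments:

  rpc_host="./scripts/rpc.py -s /var/tmp/host.sock"   # the spdk_tgt started as the host side
  $rpc_host bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0   # yields nvme0n1
  $rpc_host bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1   # yields nvme1n2
  $rpc_host bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'   # fc950410-20ea-426c-b1ad-7c91b23128f4
  $rpc_host bdev_get_bdevs -b nvme1n2 | jq -r '.[].uuid'   # adae33fb-d368-48b3-b4bb-14407f8b10ed
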
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:47.694 nvme1n2 00:14:47.695 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:47.695 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:47.695 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:47.695 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:47.695 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:47.951 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:47.951 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:47.951 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:47.951 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:48.207 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ fc950410-20ea-426c-b1ad-7c91b23128f4 == \f\c\9\5\0\4\1\0\-\2\0\e\a\-\4\2\6\c\-\b\1\a\d\-\7\c\9\1\b\2\3\1\2\8\f\4 ]] 00:14:48.207 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:48.207 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:48.208 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:48.772 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ adae33fb-d368-48b3-b4bb-14407f8b10ed == \a\d\a\e\3\3\f\b\-\d\3\6\8\-\4\8\b\3\-\b\4\b\b\-\1\4\4\0\7\f\8\b\1\0\e\d ]] 00:14:48.772 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2091849 00:14:48.772 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2091849 ']' 00:14:48.772 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2091849 00:14:48.772 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:14:48.772 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:48.772 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2091849 00:14:48.772 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:48.772 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:48.772 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 2091849' 00:14:48.772 killing process with pid 2091849 00:14:48.772 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2091849 00:14:48.772 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2091849 00:14:49.060 11:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:49.625 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:14:49.625 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:14:49.626 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:49.626 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:14:49.626 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:49.626 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:14:49.626 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:49.626 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:49.626 rmmod nvme_tcp 00:14:49.626 rmmod nvme_fabrics 00:14:49.626 rmmod nvme_keyring 00:14:49.626 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:49.626 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:14:49.626 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:14:49.626 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2090173 ']' 00:14:49.626 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2090173 00:14:49.626 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2090173 ']' 00:14:49.626 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2090173 00:14:49.626 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:14:49.626 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:49.626 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2090173 00:14:49.884 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:49.884 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:49.884 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2090173' 00:14:49.884 killing process with pid 2090173 00:14:49.884 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2090173 00:14:49.884 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2090173 00:14:50.143 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:50.143 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:50.143 
11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:50.143 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:50.143 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:50.143 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.143 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:50.143 11:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:52.677 00:14:52.677 real 0m24.763s 00:14:52.677 user 0m34.619s 00:14:52.677 sys 0m5.072s 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:52.677 ************************************ 00:14:52.677 END TEST nvmf_ns_masking 00:14:52.677 ************************************ 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:52.677 ************************************ 00:14:52.677 START TEST nvmf_nvme_cli 00:14:52.677 ************************************ 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:52.677 * Looking for test storage... 
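The ns_masking phase that just closed out drives the target entirely through rpc.py: both namespaces are detached, re-added with explicit NGUIDs derived from their UUIDs, and then exposed selectively per host NQN. A condensed sketch of that sequence, using the commands visible in the trace; the body of uuid2nguid is an assumption here, since only its 'tr -d -' step appears in the log (the uppercasing presumably happens elsewhere in nvmf/common.sh):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    uuid2nguid() {
        # assumed behavior: fc950410-20ea-... -> FC95041020EA... (uppercase, strip dashes)
        local uuid=$1
        tr -d - <<< "${uuid^^}"
    }

    # detach both namespaces, then re-add them with fixed NGUIDs, initially invisible (-i)
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 \
        -g "$(uuid2nguid fc950410-20ea-426c-b1ad-7c91b23128f4)" -i
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 \
        -g "$(uuid2nguid adae33fb-d368-48b3-b4bb-14407f8b10ed)" -i

    # grant each host visibility into exactly one namespace
    $rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    $rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2

The assertions after the two bdev_nvme_attach_controller calls then read the devices back over the host-side socket (bdev_get_bdevs piped through jq -r '.[].name' and '.[].uuid') and compare each UUID against the expected value, which is what the escaped [[ ... == \f\c\9\5... ]] patterns in the trace are doing.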
00:14:52.677 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.677 11:22:47 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # 
nvmftestinit 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:14:52.677 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:55.222 11:22:50 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:55.222 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:55.222 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:55.222 11:22:50 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:55.222 Found net devices under 0000:84:00.0: cvl_0_0 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:55.222 Found net devices under 0000:84:00.1: cvl_0_1 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:55.222 11:22:50 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:55.222 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:55.223 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:55.223 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:55.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:55.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:14:55.223 00:14:55.223 --- 10.0.0.2 ping statistics --- 00:14:55.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:55.223 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:14:55.223 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:55.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:55.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:14:55.223 00:14:55.223 --- 10.0.0.1 ping statistics --- 00:14:55.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:55.223 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:14:55.223 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:55.223 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:14:55.223 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:55.223 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:55.223 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:55.223 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:55.223 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:55.223 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:55.223 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:55.223 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:55.223 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:55.223 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:55.223 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:55.223 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2094578 00:14:55.223 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:55.223 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2094578 00:14:55.223 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 2094578 ']' 00:14:55.223 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.223 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:55.223 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.223 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:55.223 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:55.223 [2024-07-26 11:22:50.529132] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
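Before the target application finishes starting here, nvmf_tcp_init has already split the two physical ports across network namespaces so that initiator and target traffic genuinely crosses the link: the target-facing port moves into its own namespace with 10.0.0.2, the initiator keeps 10.0.0.1 in the root namespace, and the two pings above confirm reachability in both directions. The topology, distilled from the commands in the trace (interface names as renamed by the harness):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-facing port
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    ping -c 1 10.0.0.2                                    # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target namespace -> root namespace

This is also why NVMF_APP gets prefixed with NVMF_TARGET_NS_CMD above: the nvmf_tgt instance now initializing runs under 'ip netns exec cvl_0_0_ns_spdk', so its 10.0.0.2:4420 listener lives behind the namespace boundary while the nvme host commands stay outside it.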
00:14:55.223 [2024-07-26 11:22:50.529243] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:55.223 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.223 [2024-07-26 11:22:50.615271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:55.223 [2024-07-26 11:22:50.743844] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:55.223 [2024-07-26 11:22:50.743912] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:55.223 [2024-07-26 11:22:50.743928] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:55.223 [2024-07-26 11:22:50.743941] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:55.223 [2024-07-26 11:22:50.743953] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:55.223 [2024-07-26 11:22:50.744056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:55.223 [2024-07-26 11:22:50.744151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:55.223 [2024-07-26 11:22:50.744265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:55.223 [2024-07-26 11:22:50.744272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.481 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:55.481 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:14:55.481 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:55.481 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:55.481 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:55.481 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:55.481 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:55.481 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.481 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:55.482 [2024-07-26 11:22:50.917218] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:55.482 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.482 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:55.482 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.482 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:55.482 Malloc0 00:14:55.482 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.482 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:55.482 11:22:50 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.482 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:55.482 Malloc1 00:14:55.482 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.482 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:55.482 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.482 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:55.482 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.482 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:55.482 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.482 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:55.482 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.482 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:55.482 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.482 11:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:55.482 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.482 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:55.482 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.482 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:55.482 [2024-07-26 11:22:51.005223] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:55.482 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.482 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:55.482 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.482 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:55.482 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.482 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:14:55.482 00:14:55.482 Discovery Log Number of Records 2, Generation counter 2 00:14:55.482 =====Discovery Log Entry 0====== 00:14:55.482 trtype: tcp 00:14:55.482 adrfam: ipv4 00:14:55.482 subtype: current discovery subsystem 00:14:55.482 treq: not required 
00:14:55.482 portid: 0 00:14:55.482 trsvcid: 4420 00:14:55.482 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:55.482 traddr: 10.0.0.2 00:14:55.482 eflags: explicit discovery connections, duplicate discovery information 00:14:55.482 sectype: none 00:14:55.482 =====Discovery Log Entry 1====== 00:14:55.482 trtype: tcp 00:14:55.482 adrfam: ipv4 00:14:55.482 subtype: nvme subsystem 00:14:55.482 treq: not required 00:14:55.482 portid: 0 00:14:55.482 trsvcid: 4420 00:14:55.482 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:55.482 traddr: 10.0.0.2 00:14:55.482 eflags: none 00:14:55.482 sectype: none 00:14:55.482 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:55.482 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:55.482 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:55.482 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:55.482 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:55.482 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:55.482 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:55.482 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:55.482 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:55.482 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:55.482 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:56.047 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:56.047 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:14:56.047 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:56.047 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:56.047 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:56.047 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:14:58.575 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:58.575 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:58.575 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:58.575 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:58.575 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:58.575 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:14:58.575 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:58.575 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:58.575 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:58.575 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:58.575 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:58.575 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:58.575 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:58.575 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:58.575 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:58.575 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:58.576 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:58.576 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:58.576 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:58.576 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:58.576 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:14:58.576 /dev/nvme0n1 ]] 00:14:58.576 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:58.576 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:58.576 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:58.576 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:58.576 11:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:58.576 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:58.576 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:58.576 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:58.576 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:58.576 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:58.576 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:58.576 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:58.576 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:58.576 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:58.576 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:58.576 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:58.576 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:58.834 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:14:58.834 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:58.834 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:14:58.834 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:58.834 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:58.834 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:58.834 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:58.834 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:14:58.834 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:58.834 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:58.834 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.834 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:58.834 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.834 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:58.834 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:58.834 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:58.834 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:14:58.834 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:58.834 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:14:58.834 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:58.834 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:58.834 rmmod nvme_tcp 00:14:58.834 rmmod nvme_fabrics 00:14:58.834 rmmod nvme_keyring 00:14:58.834 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:58.834 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:14:58.834 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:14:58.834 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2094578 ']' 00:14:58.834 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2094578 00:14:58.834 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 2094578 ']' 00:14:58.834 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 2094578 00:14:58.834 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:14:58.834 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:58.834 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2094578 00:14:58.834 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:58.834 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:58.834 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2094578' 00:14:58.834 killing process with pid 2094578 00:14:58.834 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 2094578 00:14:58.834 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 2094578 00:14:59.092 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:59.092 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:59.092 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:59.093 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:59.093 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:59.093 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.093 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:59.093 11:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:01.624 00:15:01.624 real 0m8.985s 00:15:01.624 user 0m16.297s 00:15:01.624 sys 0m2.632s 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:01.624 ************************************ 00:15:01.624 END TEST nvmf_nvme_cli 00:15:01.624 ************************************ 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:01.624 ************************************ 00:15:01.624 START TEST nvmf_vfio_user 00:15:01.624 ************************************ 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:01.624 * Looking for test storage... 
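For orientation before the vfio-user phase gets going: the nvme_cli run that just ended is a plain provision-then-connect exercise, and both halves are visible in the trace above. Stitched together below; rpc_cmd stands in for the fully qualified rpc.py invocation, and the waitforserial loop is paraphrased from the lsblk/grep polling shown in the log:

    # target side, provisioned over /var/tmp/spdk.sock inside the target namespace
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd bdev_malloc_create 64 512 -b Malloc1
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a \
        -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # host side: discover, connect, wait for both namespaces, disconnect
    NVME_HOST=(--hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
               --hostid=cd6acfbe-4794-e311-a299-001e67a97b02)
    nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420
    nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

    # poll until both namespaces surface as block devices carrying the test serial
    i=0
    while (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) < 2 )); do
        (( i++ > 15 )) && { echo 'namespaces never appeared' >&2; exit 1; }
        sleep 2
    done

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1

The discovery log printed earlier is the direct product of the second add_listener call: entry 0 is the discovery subsystem itself, entry 1 is cnode1, both reachable on 10.0.0.2 port 4420.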
00:15:01.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:01.624 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:01.624 11:22:56 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:01.625 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:01.625 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:01.625 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:01.625 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:01.625 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:01.625 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:01.625 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2095497 00:15:01.625 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:01.625 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2095497' 00:15:01.625 Process pid: 2095497 00:15:01.625 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:01.625 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2095497 00:15:01.625 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 2095497 ']' 00:15:01.625 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.625 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:01.625 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.625 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:01.625 11:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:01.625 [2024-07-26 11:22:56.985300] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:15:01.625 [2024-07-26 11:22:56.985408] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:01.625 EAL: No free 2048 kB hugepages reported on node 1 00:15:01.625 [2024-07-26 11:22:57.063306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:01.625 [2024-07-26 11:22:57.186932] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:01.625 [2024-07-26 11:22:57.187007] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:01.625 [2024-07-26 11:22:57.187023] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:01.625 [2024-07-26 11:22:57.187037] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:01.625 [2024-07-26 11:22:57.187048] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:01.625 [2024-07-26 11:22:57.187109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:01.625 [2024-07-26 11:22:57.187140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:01.625 [2024-07-26 11:22:57.187210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:01.625 [2024-07-26 11:22:57.187214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.882 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:01.883 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:15:01.883 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:02.815 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:03.073 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:03.073 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:03.073 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:03.073 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:03.073 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:03.637 Malloc1 00:15:03.637 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:03.637 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:04.568 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:04.825 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:04.825 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:04.825 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:05.084 Malloc2 00:15:05.084 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
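For reference, the per-device provisioning that the xtrace immediately above and below records reduces to the following short loop. This is a sketch assembled only from the RPC calls visible in this log (nothing outside it); it assumes nvmf_tgt is already running and rpc.py lives at the workspace path the log shows:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # register the vfio-user transport once
  $rpc nvmf_create_transport -t VFIOUSER
  for i in 1 2; do
      # socket directory the VFIOUSER listener binds to
      mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
      # 64 MB malloc bdev, 512 B blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE above)
      $rpc bdev_malloc_create 64 512 -b Malloc$i
      $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
          -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done

The identify and perf invocations later in this log then target these listeners via -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'.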
00:15:05.648 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:05.906 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:06.501 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:06.501 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:06.501 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:06.501 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:06.501 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:06.501 11:23:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:06.501 [2024-07-26 11:23:01.863878] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:15:06.501 [2024-07-26 11:23:01.863925] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2096053 ] 00:15:06.501 EAL: No free 2048 kB hugepages reported on node 1 00:15:06.501 [2024-07-26 11:23:01.900972] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:06.501 [2024-07-26 11:23:01.909866] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:06.501 [2024-07-26 11:23:01.909900] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7faa6bce6000 00:15:06.501 [2024-07-26 11:23:01.910863] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:06.501 [2024-07-26 11:23:01.911858] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:06.501 [2024-07-26 11:23:01.912864] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:06.501 [2024-07-26 11:23:01.913872] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:06.501 [2024-07-26 11:23:01.914876] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:06.501 [2024-07-26 11:23:01.915881] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:06.501 [2024-07-26 11:23:01.916888] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:06.501 [2024-07-26 11:23:01.917897] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:06.501 [2024-07-26 11:23:01.918900] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:06.501 [2024-07-26 11:23:01.918922] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7faa6bcdb000 00:15:06.501 [2024-07-26 11:23:01.920199] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:06.501 [2024-07-26 11:23:01.936752] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:06.501 [2024-07-26 11:23:01.936793] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:06.502 [2024-07-26 11:23:01.942067] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:06.502 [2024-07-26 11:23:01.942129] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:06.502 [2024-07-26 11:23:01.942239] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:06.502 [2024-07-26 11:23:01.942274] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:06.502 [2024-07-26 11:23:01.942287] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:06.502 [2024-07-26 11:23:01.943054] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:06.502 [2024-07-26 11:23:01.943082] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:06.502 [2024-07-26 11:23:01.943098] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:06.502 [2024-07-26 11:23:01.944059] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:06.502 [2024-07-26 11:23:01.944079] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:06.502 [2024-07-26 11:23:01.944095] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:06.502 [2024-07-26 11:23:01.945065] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:06.502 [2024-07-26 11:23:01.945087] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:06.502 [2024-07-26 11:23:01.946067] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:06.502 [2024-07-26 11:23:01.946088] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:06.502 [2024-07-26 11:23:01.946098] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:06.502 [2024-07-26 11:23:01.946111] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:06.502 [2024-07-26 11:23:01.946222] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:06.502 [2024-07-26 11:23:01.946231] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:06.502 [2024-07-26 11:23:01.946240] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:06.502 [2024-07-26 11:23:01.947073] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:06.502 [2024-07-26 11:23:01.948077] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:06.502 [2024-07-26 11:23:01.949084] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:06.502 [2024-07-26 11:23:01.950079] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:06.502 [2024-07-26 11:23:01.950181] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:06.502 [2024-07-26 11:23:01.951094] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:06.502 [2024-07-26 11:23:01.951114] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:06.502 [2024-07-26 11:23:01.951125] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:06.502 [2024-07-26 11:23:01.951157] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:06.502 [2024-07-26 11:23:01.951172] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:06.502 [2024-07-26 11:23:01.951199] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:06.502 [2024-07-26 11:23:01.951210] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:06.502 [2024-07-26 11:23:01.951217] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:06.502 [2024-07-26 11:23:01.951236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 
cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:06.502 [2024-07-26 11:23:01.951297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:06.502 [2024-07-26 11:23:01.951315] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:06.502 [2024-07-26 11:23:01.951324] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:06.502 [2024-07-26 11:23:01.951332] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:06.502 [2024-07-26 11:23:01.951341] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:06.502 [2024-07-26 11:23:01.951349] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:06.502 [2024-07-26 11:23:01.951358] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:06.502 [2024-07-26 11:23:01.951366] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:06.502 [2024-07-26 11:23:01.951379] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:06.502 [2024-07-26 11:23:01.951401] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:06.502 [2024-07-26 11:23:01.951423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:06.502 [2024-07-26 11:23:01.951456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:06.502 [2024-07-26 11:23:01.951472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:06.502 [2024-07-26 11:23:01.951485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:06.502 [2024-07-26 11:23:01.951498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:06.502 [2024-07-26 11:23:01.951508] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:06.502 [2024-07-26 11:23:01.951525] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:06.502 [2024-07-26 11:23:01.951541] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:06.502 [2024-07-26 11:23:01.951558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:06.502 [2024-07-26 11:23:01.951576] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:06.502 
[2024-07-26 11:23:01.951586] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:06.502 [2024-07-26 11:23:01.951602] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:06.502 [2024-07-26 11:23:01.951614] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:06.502 [2024-07-26 11:23:01.951629] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:06.502 [2024-07-26 11:23:01.951645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:06.502 [2024-07-26 11:23:01.951720] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:06.502 [2024-07-26 11:23:01.951738] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:06.502 [2024-07-26 11:23:01.951752] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:06.502 [2024-07-26 11:23:01.951762] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:06.502 [2024-07-26 11:23:01.951768] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:06.502 [2024-07-26 11:23:01.951779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:06.502 [2024-07-26 11:23:01.951795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:06.502 [2024-07-26 11:23:01.951819] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:06.502 [2024-07-26 11:23:01.951836] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:06.502 [2024-07-26 11:23:01.951852] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:06.502 [2024-07-26 11:23:01.951866] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:06.502 [2024-07-26 11:23:01.951875] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:06.502 [2024-07-26 11:23:01.951882] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:06.502 [2024-07-26 11:23:01.951892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:06.502 [2024-07-26 11:23:01.951921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:06.502 [2024-07-26 11:23:01.951944] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 
30000 ms) 00:15:06.502 [2024-07-26 11:23:01.951960] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:06.502 [2024-07-26 11:23:01.951974] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:06.502 [2024-07-26 11:23:01.951983] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:06.502 [2024-07-26 11:23:01.951990] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:06.503 [2024-07-26 11:23:01.952000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:06.503 [2024-07-26 11:23:01.952020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:06.503 [2024-07-26 11:23:01.952037] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:06.503 [2024-07-26 11:23:01.952050] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:15:06.503 [2024-07-26 11:23:01.952065] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:06.503 [2024-07-26 11:23:01.952079] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:06.503 [2024-07-26 11:23:01.952089] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:06.503 [2024-07-26 11:23:01.952099] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:06.503 [2024-07-26 11:23:01.952108] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:06.503 [2024-07-26 11:23:01.952116] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:06.503 [2024-07-26 11:23:01.952125] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:06.503 [2024-07-26 11:23:01.952154] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:06.503 [2024-07-26 11:23:01.952174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:06.503 [2024-07-26 11:23:01.952195] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:06.503 [2024-07-26 11:23:01.952209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:06.503 [2024-07-26 11:23:01.952227] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:06.503 [2024-07-26 
11:23:01.952240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:06.503 [2024-07-26 11:23:01.952259] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:06.503 [2024-07-26 11:23:01.952272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:06.503 [2024-07-26 11:23:01.952297] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:06.503 [2024-07-26 11:23:01.952308] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:06.503 [2024-07-26 11:23:01.952315] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:06.503 [2024-07-26 11:23:01.952322] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:06.503 [2024-07-26 11:23:01.952328] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:06.503 [2024-07-26 11:23:01.952339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:06.503 [2024-07-26 11:23:01.952352] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:06.503 [2024-07-26 11:23:01.952361] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:06.503 [2024-07-26 11:23:01.952371] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:06.503 [2024-07-26 11:23:01.952382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:06.503 [2024-07-26 11:23:01.952394] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:06.503 [2024-07-26 11:23:01.952403] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:06.503 [2024-07-26 11:23:01.952410] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:06.503 [2024-07-26 11:23:01.952420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:06.503 [2024-07-26 11:23:01.952442] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:06.503 [2024-07-26 11:23:01.952453] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:06.503 [2024-07-26 11:23:01.952459] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:06.503 [2024-07-26 11:23:01.952469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:06.503 [2024-07-26 11:23:01.952483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:06.503 [2024-07-26 11:23:01.952505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:06.503 [2024-07-26 
11:23:01.952525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:06.503 [2024-07-26 11:23:01.952539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:06.503 ===================================================== 00:15:06.503 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:06.503 ===================================================== 00:15:06.503 Controller Capabilities/Features 00:15:06.503 ================================ 00:15:06.503 Vendor ID: 4e58 00:15:06.503 Subsystem Vendor ID: 4e58 00:15:06.503 Serial Number: SPDK1 00:15:06.503 Model Number: SPDK bdev Controller 00:15:06.503 Firmware Version: 24.09 00:15:06.503 Recommended Arb Burst: 6 00:15:06.503 IEEE OUI Identifier: 8d 6b 50 00:15:06.503 Multi-path I/O 00:15:06.503 May have multiple subsystem ports: Yes 00:15:06.503 May have multiple controllers: Yes 00:15:06.503 Associated with SR-IOV VF: No 00:15:06.503 Max Data Transfer Size: 131072 00:15:06.503 Max Number of Namespaces: 32 00:15:06.503 Max Number of I/O Queues: 127 00:15:06.503 NVMe Specification Version (VS): 1.3 00:15:06.503 NVMe Specification Version (Identify): 1.3 00:15:06.503 Maximum Queue Entries: 256 00:15:06.503 Contiguous Queues Required: Yes 00:15:06.503 Arbitration Mechanisms Supported 00:15:06.503 Weighted Round Robin: Not Supported 00:15:06.503 Vendor Specific: Not Supported 00:15:06.503 Reset Timeout: 15000 ms 00:15:06.503 Doorbell Stride: 4 bytes 00:15:06.503 NVM Subsystem Reset: Not Supported 00:15:06.503 Command Sets Supported 00:15:06.503 NVM Command Set: Supported 00:15:06.503 Boot Partition: Not Supported 00:15:06.503 Memory Page Size Minimum: 4096 bytes 00:15:06.503 Memory Page Size Maximum: 4096 bytes 00:15:06.503 Persistent Memory Region: Not Supported 00:15:06.503 Optional Asynchronous Events Supported 00:15:06.503 Namespace Attribute Notices: Supported 00:15:06.503 Firmware Activation Notices: Not Supported 00:15:06.503 ANA Change Notices: Not Supported 00:15:06.503 PLE Aggregate Log Change Notices: Not Supported 00:15:06.503 LBA Status Info Alert Notices: Not Supported 00:15:06.503 EGE Aggregate Log Change Notices: Not Supported 00:15:06.503 Normal NVM Subsystem Shutdown event: Not Supported 00:15:06.503 Zone Descriptor Change Notices: Not Supported 00:15:06.503 Discovery Log Change Notices: Not Supported 00:15:06.503 Controller Attributes 00:15:06.503 128-bit Host Identifier: Supported 00:15:06.503 Non-Operational Permissive Mode: Not Supported 00:15:06.503 NVM Sets: Not Supported 00:15:06.503 Read Recovery Levels: Not Supported 00:15:06.503 Endurance Groups: Not Supported 00:15:06.503 Predictable Latency Mode: Not Supported 00:15:06.503 Traffic Based Keep ALive: Not Supported 00:15:06.503 Namespace Granularity: Not Supported 00:15:06.503 SQ Associations: Not Supported 00:15:06.503 UUID List: Not Supported 00:15:06.503 Multi-Domain Subsystem: Not Supported 00:15:06.503 Fixed Capacity Management: Not Supported 00:15:06.503 Variable Capacity Management: Not Supported 00:15:06.503 Delete Endurance Group: Not Supported 00:15:06.503 Delete NVM Set: Not Supported 00:15:06.503 Extended LBA Formats Supported: Not Supported 00:15:06.503 Flexible Data Placement Supported: Not Supported 00:15:06.503 00:15:06.503 Controller Memory Buffer Support 00:15:06.503 ================================ 00:15:06.503 Supported: No 00:15:06.503 00:15:06.503 Persistent 
Memory Region Support 00:15:06.503 ================================ 00:15:06.503 Supported: No 00:15:06.503 00:15:06.503 Admin Command Set Attributes 00:15:06.503 ============================ 00:15:06.503 Security Send/Receive: Not Supported 00:15:06.503 Format NVM: Not Supported 00:15:06.503 Firmware Activate/Download: Not Supported 00:15:06.503 Namespace Management: Not Supported 00:15:06.503 Device Self-Test: Not Supported 00:15:06.503 Directives: Not Supported 00:15:06.504 NVMe-MI: Not Supported 00:15:06.504 Virtualization Management: Not Supported 00:15:06.504 Doorbell Buffer Config: Not Supported 00:15:06.504 Get LBA Status Capability: Not Supported 00:15:06.504 Command & Feature Lockdown Capability: Not Supported 00:15:06.504 Abort Command Limit: 4 00:15:06.504 Async Event Request Limit: 4 00:15:06.504 Number of Firmware Slots: N/A 00:15:06.504 Firmware Slot 1 Read-Only: N/A 00:15:06.504 Firmware Activation Without Reset: N/A 00:15:06.504 Multiple Update Detection Support: N/A 00:15:06.504 Firmware Update Granularity: No Information Provided 00:15:06.504 Per-Namespace SMART Log: No 00:15:06.504 Asymmetric Namespace Access Log Page: Not Supported 00:15:06.504 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:06.504 Command Effects Log Page: Supported 00:15:06.504 Get Log Page Extended Data: Supported 00:15:06.504 Telemetry Log Pages: Not Supported 00:15:06.504 Persistent Event Log Pages: Not Supported 00:15:06.504 Supported Log Pages Log Page: May Support 00:15:06.504 Commands Supported & Effects Log Page: Not Supported 00:15:06.504 Feature Identifiers & Effects Log Page:May Support 00:15:06.504 NVMe-MI Commands & Effects Log Page: May Support 00:15:06.504 Data Area 4 for Telemetry Log: Not Supported 00:15:06.504 Error Log Page Entries Supported: 128 00:15:06.504 Keep Alive: Supported 00:15:06.504 Keep Alive Granularity: 10000 ms 00:15:06.504 00:15:06.504 NVM Command Set Attributes 00:15:06.504 ========================== 00:15:06.504 Submission Queue Entry Size 00:15:06.504 Max: 64 00:15:06.504 Min: 64 00:15:06.504 Completion Queue Entry Size 00:15:06.504 Max: 16 00:15:06.504 Min: 16 00:15:06.504 Number of Namespaces: 32 00:15:06.504 Compare Command: Supported 00:15:06.504 Write Uncorrectable Command: Not Supported 00:15:06.504 Dataset Management Command: Supported 00:15:06.504 Write Zeroes Command: Supported 00:15:06.504 Set Features Save Field: Not Supported 00:15:06.504 Reservations: Not Supported 00:15:06.504 Timestamp: Not Supported 00:15:06.504 Copy: Supported 00:15:06.504 Volatile Write Cache: Present 00:15:06.504 Atomic Write Unit (Normal): 1 00:15:06.504 Atomic Write Unit (PFail): 1 00:15:06.504 Atomic Compare & Write Unit: 1 00:15:06.504 Fused Compare & Write: Supported 00:15:06.504 Scatter-Gather List 00:15:06.504 SGL Command Set: Supported (Dword aligned) 00:15:06.504 SGL Keyed: Not Supported 00:15:06.504 SGL Bit Bucket Descriptor: Not Supported 00:15:06.504 SGL Metadata Pointer: Not Supported 00:15:06.504 Oversized SGL: Not Supported 00:15:06.504 SGL Metadata Address: Not Supported 00:15:06.504 SGL Offset: Not Supported 00:15:06.504 Transport SGL Data Block: Not Supported 00:15:06.504 Replay Protected Memory Block: Not Supported 00:15:06.504 00:15:06.504 Firmware Slot Information 00:15:06.504 ========================= 00:15:06.504 Active slot: 1 00:15:06.504 Slot 1 Firmware Revision: 24.09 00:15:06.504 00:15:06.504 00:15:06.504 Commands Supported and Effects 00:15:06.504 ============================== 00:15:06.504 Admin Commands 00:15:06.504 -------------- 00:15:06.504 Get 
Log Page (02h): Supported 00:15:06.504 Identify (06h): Supported 00:15:06.504 Abort (08h): Supported 00:15:06.504 Set Features (09h): Supported 00:15:06.504 Get Features (0Ah): Supported 00:15:06.504 Asynchronous Event Request (0Ch): Supported 00:15:06.504 Keep Alive (18h): Supported 00:15:06.504 I/O Commands 00:15:06.504 ------------ 00:15:06.504 Flush (00h): Supported LBA-Change 00:15:06.504 Write (01h): Supported LBA-Change 00:15:06.504 Read (02h): Supported 00:15:06.504 Compare (05h): Supported 00:15:06.504 Write Zeroes (08h): Supported LBA-Change 00:15:06.504 Dataset Management (09h): Supported LBA-Change 00:15:06.504 Copy (19h): Supported LBA-Change 00:15:06.504 00:15:06.504 Error Log 00:15:06.504 ========= 00:15:06.504 00:15:06.504 Arbitration 00:15:06.504 =========== 00:15:06.504 Arbitration Burst: 1 00:15:06.504 00:15:06.504 Power Management 00:15:06.504 ================ 00:15:06.504 Number of Power States: 1 00:15:06.504 Current Power State: Power State #0 00:15:06.504 Power State #0: 00:15:06.504 Max Power: 0.00 W 00:15:06.504 Non-Operational State: Operational 00:15:06.504 Entry Latency: Not Reported 00:15:06.504 Exit Latency: Not Reported 00:15:06.504 Relative Read Throughput: 0 00:15:06.504 Relative Read Latency: 0 00:15:06.504 Relative Write Throughput: 0 00:15:06.504 Relative Write Latency: 0 00:15:06.504 Idle Power: Not Reported 00:15:06.504 Active Power: Not Reported 00:15:06.504 Non-Operational Permissive Mode: Not Supported 00:15:06.504 00:15:06.504 Health Information 00:15:06.504 ================== 00:15:06.504 Critical Warnings: 00:15:06.504 Available Spare Space: OK 00:15:06.504 Temperature: OK 00:15:06.504 Device Reliability: OK 00:15:06.504 Read Only: No 00:15:06.504 Volatile Memory Backup: OK 00:15:06.504 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:06.504 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:06.504 Available Spare: 0% 00:15:06.504 Available Sp[2024-07-26 11:23:01.952678] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:06.504 [2024-07-26 11:23:01.952697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:06.504 [2024-07-26 11:23:01.952744] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:06.504 [2024-07-26 11:23:01.952763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:06.504 [2024-07-26 11:23:01.952776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:06.504 [2024-07-26 11:23:01.952787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:06.504 [2024-07-26 11:23:01.952798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:06.504 [2024-07-26 11:23:01.953111] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:06.504 [2024-07-26 11:23:01.953135] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:06.504 [2024-07-26 11:23:01.954111] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:06.504 [2024-07-26 11:23:01.954192] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:06.504 [2024-07-26 11:23:01.954208] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:06.504 [2024-07-26 11:23:01.956440] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:06.504 [2024-07-26 11:23:01.956470] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 2 milliseconds 00:15:06.504 [2024-07-26 11:23:01.956534] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:06.504 [2024-07-26 11:23:01.961441] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:06.504 are Threshold: 0% 00:15:06.504 Life Percentage Used: 0% 00:15:06.504 Data Units Read: 0 00:15:06.504 Data Units Written: 0 00:15:06.504 Host Read Commands: 0 00:15:06.504 Host Write Commands: 0 00:15:06.504 Controller Busy Time: 0 minutes 00:15:06.504 Power Cycles: 0 00:15:06.504 Power On Hours: 0 hours 00:15:06.504 Unsafe Shutdowns: 0 00:15:06.504 Unrecoverable Media Errors: 0 00:15:06.504 Lifetime Error Log Entries: 0 00:15:06.504 Warning Temperature Time: 0 minutes 00:15:06.504 Critical Temperature Time: 0 minutes 00:15:06.504 00:15:06.504 Number of Queues 00:15:06.504 ================ 00:15:06.504 Number of I/O Submission Queues: 127 00:15:06.504 Number of I/O Completion Queues: 127 00:15:06.504 00:15:06.504 Active Namespaces 00:15:06.504 ================= 00:15:06.504 Namespace ID:1 00:15:06.504 Error Recovery Timeout: Unlimited 00:15:06.504 Command Set Identifier: NVM (00h) 00:15:06.504 Deallocate: Supported 00:15:06.504 Deallocated/Unwritten Error: Not Supported 00:15:06.504 Deallocated Read Value: Unknown 00:15:06.504 Deallocate in Write Zeroes: Not Supported 00:15:06.504 Deallocated Guard Field: 0xFFFF 00:15:06.504 Flush: Supported 00:15:06.504 Reservation: Supported 00:15:06.504 Namespace Sharing Capabilities: Multiple Controllers 00:15:06.504 Size (in LBAs): 131072 (0GiB) 00:15:06.504 Capacity (in LBAs): 131072 (0GiB) 00:15:06.504 Utilization (in LBAs): 131072 (0GiB) 00:15:06.504 NGUID: 30378F02A7364B598A326725419926CC 00:15:06.505 UUID: 30378f02-a736-4b59-8a32-6725419926cc 00:15:06.505 Thin Provisioning: Not Supported 00:15:06.505 Per-NS Atomic Units: Yes 00:15:06.505 Atomic Boundary Size (Normal): 0 00:15:06.505 Atomic Boundary Size (PFail): 0 00:15:06.505 Atomic Boundary Offset: 0 00:15:06.505 Maximum Single Source Range Length: 65535 00:15:06.505 Maximum Copy Length: 65535 00:15:06.505 Maximum Source Range Count: 1 00:15:06.505 NGUID/EUI64 Never Reused: No 00:15:06.505 Namespace Write Protected: No 00:15:06.505 Number of LBA Formats: 1 00:15:06.505 Current LBA Format: LBA Format #00 00:15:06.505 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:06.505 00:15:06.505 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:06.505 EAL: No free 2048 kB hugepages reported 
on node 1 00:15:06.763 [2024-07-26 11:23:02.216069] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:12.023 Initializing NVMe Controllers 00:15:12.023 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:12.023 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:12.023 Initialization complete. Launching workers. 00:15:12.023 ======================================================== 00:15:12.024 Latency(us) 00:15:12.024 Device Information : IOPS MiB/s Average min max 00:15:12.024 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 25622.57 100.09 4995.90 1376.30 8494.19 00:15:12.024 ======================================================== 00:15:12.024 Total : 25622.57 100.09 4995.90 1376.30 8494.19 00:15:12.024 00:15:12.024 [2024-07-26 11:23:07.239858] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:12.024 11:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:12.024 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.024 [2024-07-26 11:23:07.499136] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:17.279 Initializing NVMe Controllers 00:15:17.279 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:17.279 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:17.279 Initialization complete. Launching workers. 
00:15:17.279 ======================================================== 00:15:17.279 Latency(us) 00:15:17.279 Device Information : IOPS MiB/s Average min max 00:15:17.279 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16039.88 62.66 7979.28 5972.06 15240.81 00:15:17.279 ======================================================== 00:15:17.279 Total : 16039.88 62.66 7979.28 5972.06 15240.81 00:15:17.279 00:15:17.279 [2024-07-26 11:23:12.532088] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:17.279 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:17.279 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.279 [2024-07-26 11:23:12.791388] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:22.540 [2024-07-26 11:23:17.876891] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:22.540 Initializing NVMe Controllers 00:15:22.540 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:22.540 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:22.540 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:22.540 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:22.540 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:22.540 Initialization complete. Launching workers. 00:15:22.540 Starting thread on core 2 00:15:22.540 Starting thread on core 3 00:15:22.540 Starting thread on core 1 00:15:22.540 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:22.540 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.798 [2024-07-26 11:23:18.211929] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:26.079 [2024-07-26 11:23:21.271957] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:26.079 Initializing NVMe Controllers 00:15:26.079 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:26.079 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:26.079 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:26.079 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:26.079 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:26.079 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:26.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:26.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:26.079 Initialization complete. Launching workers. 
00:15:26.079 Starting thread on core 1 with urgent priority queue 00:15:26.079 Starting thread on core 2 with urgent priority queue 00:15:26.079 Starting thread on core 3 with urgent priority queue 00:15:26.079 Starting thread on core 0 with urgent priority queue 00:15:26.079 SPDK bdev Controller (SPDK1 ) core 0: 1365.00 IO/s 73.26 secs/100000 ios 00:15:26.079 SPDK bdev Controller (SPDK1 ) core 1: 1576.33 IO/s 63.44 secs/100000 ios 00:15:26.079 SPDK bdev Controller (SPDK1 ) core 2: 1561.00 IO/s 64.06 secs/100000 ios 00:15:26.079 SPDK bdev Controller (SPDK1 ) core 3: 1450.33 IO/s 68.95 secs/100000 ios 00:15:26.079 ======================================================== 00:15:26.079 00:15:26.079 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:26.079 EAL: No free 2048 kB hugepages reported on node 1 00:15:26.079 [2024-07-26 11:23:21.593707] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:26.079 Initializing NVMe Controllers 00:15:26.079 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:26.079 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:26.079 Namespace ID: 1 size: 0GB 00:15:26.079 Initialization complete. 00:15:26.079 INFO: using host memory buffer for IO 00:15:26.079 Hello world! 00:15:26.079 [2024-07-26 11:23:21.627473] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:26.079 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:26.337 EAL: No free 2048 kB hugepages reported on node 1 00:15:26.594 [2024-07-26 11:23:22.002907] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:27.528 Initializing NVMe Controllers 00:15:27.528 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:27.528 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:27.528 Initialization complete. Launching workers. 
00:15:27.528 submit (in ns) avg, min, max = 9409.2, 4219.3, 4006179.3 00:15:27.528 complete (in ns) avg, min, max = 26997.9, 2438.5, 4005589.6 00:15:27.528 00:15:27.528 Submit histogram 00:15:27.528 ================ 00:15:27.528 Range in us Cumulative Count 00:15:27.528 4.219 - 4.243: 0.0590% ( 7) 00:15:27.528 4.243 - 4.267: 0.6401% ( 69) 00:15:27.528 4.267 - 4.290: 3.8828% ( 385) 00:15:27.528 4.290 - 4.314: 9.5427% ( 672) 00:15:27.528 4.314 - 4.338: 17.0639% ( 893) 00:15:27.528 4.338 - 4.361: 25.3011% ( 978) 00:15:27.528 4.361 - 4.385: 30.0514% ( 564) 00:15:27.528 4.385 - 4.409: 33.7067% ( 434) 00:15:27.528 4.409 - 4.433: 35.2902% ( 188) 00:15:27.528 4.433 - 4.456: 36.1914% ( 107) 00:15:27.528 4.456 - 4.480: 36.8904% ( 83) 00:15:27.528 4.480 - 4.504: 38.9455% ( 244) 00:15:27.528 4.504 - 4.527: 42.7525% ( 452) 00:15:27.528 4.527 - 4.551: 47.5617% ( 571) 00:15:27.528 4.551 - 4.575: 51.8066% ( 504) 00:15:27.528 4.575 - 4.599: 53.9965% ( 260) 00:15:27.528 4.599 - 4.622: 55.5462% ( 184) 00:15:27.528 4.622 - 4.646: 56.3379% ( 94) 00:15:27.528 4.646 - 4.670: 56.8180% ( 57) 00:15:27.528 4.670 - 4.693: 57.2138% ( 47) 00:15:27.528 4.693 - 4.717: 58.1572% ( 112) 00:15:27.528 4.717 - 4.741: 59.6901% ( 182) 00:15:27.528 4.741 - 4.764: 60.8608% ( 139) 00:15:27.528 4.764 - 4.788: 61.7367% ( 104) 00:15:27.528 4.788 - 4.812: 62.0231% ( 34) 00:15:27.528 4.812 - 4.836: 62.1073% ( 10) 00:15:27.528 4.836 - 4.859: 62.2505% ( 17) 00:15:27.528 4.859 - 4.883: 63.0254% ( 92) 00:15:27.528 4.883 - 4.907: 64.7520% ( 205) 00:15:27.528 4.907 - 4.930: 72.6101% ( 933) 00:15:27.528 4.930 - 4.954: 83.5762% ( 1302) 00:15:27.528 4.954 - 4.978: 92.9504% ( 1113) 00:15:27.528 4.978 - 5.001: 96.1425% ( 379) 00:15:27.528 5.001 - 5.025: 96.7405% ( 71) 00:15:27.528 5.025 - 5.049: 97.0521% ( 37) 00:15:27.528 5.049 - 5.073: 97.2627% ( 25) 00:15:27.528 5.073 - 5.096: 97.3385% ( 9) 00:15:27.528 5.096 - 5.120: 97.4311% ( 11) 00:15:27.528 5.120 - 5.144: 97.4985% ( 8) 00:15:27.528 5.144 - 5.167: 97.5828% ( 10) 00:15:27.528 5.167 - 5.191: 97.6417% ( 7) 00:15:27.528 5.191 - 5.215: 97.7596% ( 14) 00:15:27.528 5.215 - 5.239: 97.8354% ( 9) 00:15:27.528 5.239 - 5.262: 97.9870% ( 18) 00:15:27.528 5.262 - 5.286: 98.0628% ( 9) 00:15:27.528 5.286 - 5.310: 98.1134% ( 6) 00:15:27.528 5.310 - 5.333: 98.1723% ( 7) 00:15:27.528 5.333 - 5.357: 98.1892% ( 2) 00:15:27.528 5.357 - 5.381: 98.2313% ( 5) 00:15:27.528 5.381 - 5.404: 98.2734% ( 5) 00:15:27.528 5.404 - 5.428: 98.2902% ( 2) 00:15:27.528 5.428 - 5.452: 98.3324% ( 5) 00:15:27.528 5.452 - 5.476: 98.3492% ( 2) 00:15:27.528 5.476 - 5.499: 98.3829% ( 4) 00:15:27.528 5.499 - 5.523: 98.4250% ( 5) 00:15:27.528 5.523 - 5.547: 98.4671% ( 5) 00:15:27.528 5.547 - 5.570: 98.4840% ( 2) 00:15:27.528 5.570 - 5.594: 98.5092% ( 3) 00:15:27.528 5.594 - 5.618: 98.5345% ( 3) 00:15:27.528 5.618 - 5.641: 98.5513% ( 2) 00:15:27.528 5.641 - 5.665: 98.5598% ( 1) 00:15:27.528 5.665 - 5.689: 98.5682% ( 1) 00:15:27.528 5.689 - 5.713: 98.5850% ( 2) 00:15:27.528 5.713 - 5.736: 98.6271% ( 5) 00:15:27.528 5.736 - 5.760: 98.6356% ( 1) 00:15:27.528 5.760 - 5.784: 98.6608% ( 3) 00:15:27.528 5.784 - 5.807: 98.6692% ( 1) 00:15:27.528 5.807 - 5.831: 98.6861% ( 2) 00:15:27.528 5.831 - 5.855: 98.7198% ( 4) 00:15:27.528 5.855 - 5.879: 98.7451% ( 3) 00:15:27.528 5.879 - 5.902: 98.7619% ( 2) 00:15:27.528 5.902 - 5.926: 98.7956% ( 4) 00:15:27.528 5.926 - 5.950: 98.8209% ( 3) 00:15:27.528 5.950 - 5.973: 98.8377% ( 2) 00:15:27.528 6.068 - 6.116: 98.8545% ( 2) 00:15:27.528 6.163 - 6.210: 98.8630% ( 1) 00:15:27.528 6.210 - 6.258: 98.8798% ( 2) 
00:15:27.528 6.258 - 6.305: 98.9051% ( 3) 00:15:27.528 6.353 - 6.400: 98.9135% ( 1) 00:15:27.528 6.400 - 6.447: 98.9303% ( 2) 00:15:27.528 6.447 - 6.495: 98.9472% ( 2) 00:15:27.528 6.495 - 6.542: 98.9640% ( 2) 00:15:27.528 6.542 - 6.590: 98.9725% ( 1) 00:15:27.528 6.637 - 6.684: 98.9809% ( 1) 00:15:27.528 6.827 - 6.874: 98.9977% ( 2) 00:15:27.528 7.016 - 7.064: 99.0061% ( 1) 00:15:27.528 7.064 - 7.111: 99.0146% ( 1) 00:15:27.528 7.111 - 7.159: 99.0230% ( 1) 00:15:27.528 7.159 - 7.206: 99.0314% ( 1) 00:15:27.528 7.206 - 7.253: 99.0398% ( 1) 00:15:27.528 7.680 - 7.727: 99.0483% ( 1) 00:15:27.528 7.775 - 7.822: 99.0567% ( 1) 00:15:27.528 7.822 - 7.870: 99.0651% ( 1) 00:15:27.528 7.870 - 7.917: 99.0735% ( 1) 00:15:27.528 7.964 - 8.012: 99.0904% ( 2) 00:15:27.528 8.012 - 8.059: 99.0988% ( 1) 00:15:27.528 8.201 - 8.249: 99.1156% ( 2) 00:15:27.528 8.249 - 8.296: 99.1241% ( 1) 00:15:27.528 8.439 - 8.486: 99.1325% ( 1) 00:15:27.528 8.486 - 8.533: 99.1409% ( 1) 00:15:27.528 8.533 - 8.581: 99.1493% ( 1) 00:15:27.528 8.581 - 8.628: 99.1578% ( 1) 00:15:27.528 8.628 - 8.676: 99.1662% ( 1) 00:15:27.528 9.007 - 9.055: 99.1746% ( 1) 00:15:27.528 9.055 - 9.102: 99.2083% ( 4) 00:15:27.528 9.150 - 9.197: 99.2167% ( 1) 00:15:27.528 9.197 - 9.244: 99.2251% ( 1) 00:15:27.528 9.292 - 9.339: 99.2420% ( 2) 00:15:27.528 9.339 - 9.387: 99.2672% ( 3) 00:15:27.528 9.434 - 9.481: 99.2757% ( 1) 00:15:27.528 9.481 - 9.529: 99.2841% ( 1) 00:15:27.528 9.624 - 9.671: 99.3094% ( 3) 00:15:27.528 9.671 - 9.719: 99.3346% ( 3) 00:15:27.528 9.766 - 9.813: 99.3430% ( 1) 00:15:27.528 9.813 - 9.861: 99.3599% ( 2) 00:15:27.528 9.861 - 9.908: 99.3767% ( 2) 00:15:27.528 9.908 - 9.956: 99.3936% ( 2) 00:15:27.528 9.956 - 10.003: 99.4104% ( 2) 00:15:27.528 10.098 - 10.145: 99.4188% ( 1) 00:15:27.528 10.145 - 10.193: 99.4273% ( 1) 00:15:27.528 10.240 - 10.287: 99.4357% ( 1) 00:15:27.528 10.430 - 10.477: 99.4441% ( 1) 00:15:27.528 10.524 - 10.572: 99.4525% ( 1) 00:15:27.528 10.572 - 10.619: 99.4610% ( 1) 00:15:27.528 10.619 - 10.667: 99.4778% ( 2) 00:15:27.528 10.667 - 10.714: 99.4862% ( 1) 00:15:27.528 10.714 - 10.761: 99.4947% ( 1) 00:15:27.528 10.761 - 10.809: 99.5199% ( 3) 00:15:27.528 10.809 - 10.856: 99.5283% ( 1) 00:15:27.528 10.999 - 11.046: 99.5368% ( 1) 00:15:27.528 11.046 - 11.093: 99.5452% ( 1) 00:15:27.528 11.093 - 11.141: 99.5536% ( 1) 00:15:27.528 11.283 - 11.330: 99.5620% ( 1) 00:15:27.528 11.378 - 11.425: 99.5705% ( 1) 00:15:27.528 11.520 - 11.567: 99.5789% ( 1) 00:15:27.528 11.567 - 11.615: 99.5873% ( 1) 00:15:27.528 11.615 - 11.662: 99.5957% ( 1) 00:15:27.528 11.662 - 11.710: 99.6041% ( 1) 00:15:27.528 11.710 - 11.757: 99.6126% ( 1) 00:15:27.528 11.804 - 11.852: 99.6210% ( 1) 00:15:27.528 11.852 - 11.899: 99.6378% ( 2) 00:15:27.528 11.994 - 12.041: 99.6463% ( 1) 00:15:27.528 12.089 - 12.136: 99.6547% ( 1) 00:15:27.528 12.136 - 12.231: 99.6631% ( 1) 00:15:27.528 12.231 - 12.326: 99.6715% ( 1) 00:15:27.528 12.421 - 12.516: 99.6799% ( 1) 00:15:27.528 12.516 - 12.610: 99.6884% ( 1) 00:15:27.528 12.990 - 13.084: 99.6968% ( 1) 00:15:27.528 13.179 - 13.274: 99.7221% ( 3) 00:15:27.528 13.369 - 13.464: 99.7389% ( 2) 00:15:27.528 13.464 - 13.559: 99.7473% ( 1) 00:15:27.528 13.559 - 13.653: 99.7557% ( 1) 00:15:27.528 13.653 - 13.748: 99.7726% ( 2) 00:15:27.528 13.843 - 13.938: 99.7810% ( 1) 00:15:27.528 14.127 - 14.222: 99.7894% ( 1) 00:15:27.528 14.222 - 14.317: 99.7979% ( 1) 00:15:27.528 14.412 - 14.507: 99.8063% ( 1) 00:15:27.528 14.791 - 14.886: 99.8147% ( 1) 00:15:27.528 14.981 - 15.076: 99.8316% ( 2) 00:15:27.528 15.076 - 
15.170: 99.8400% ( 1) 00:15:27.528 15.265 - 15.360: 99.8484% ( 1) 00:15:27.528 15.360 - 15.455: 99.8652% ( 2) 00:15:27.528 15.455 - 15.550: 99.8737% ( 1) 00:15:27.528 15.644 - 15.739: 99.8821% ( 1) 00:15:27.528 3980.705 - 4004.978: 99.9916% ( 13) 00:15:27.528 4004.978 - 4029.250: 100.0000% ( 1) 00:15:27.528 00:15:27.528 [2024-07-26 11:23:23.025701] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:27.528 Complete histogram 00:15:27.528 ================== 00:15:27.529 Range in us Cumulative Count 00:15:27.529 2.430 - 2.441: 0.0168% ( 2) 00:15:27.529 2.441 - 2.453: 3.8575% ( 456) 00:15:27.529 2.453 - 2.465: 43.6874% ( 4729) 00:15:27.529 2.465 - 2.477: 66.3607% ( 2692) 00:15:27.529 2.477 - 2.489: 69.4349% ( 365) 00:15:27.529 2.489 - 2.501: 80.6536% ( 1332) 00:15:27.529 2.501 - 2.513: 90.7353% ( 1197) 00:15:27.529 2.513 - 2.524: 93.7421% ( 357) 00:15:27.529 2.524 - 2.536: 96.4878% ( 326) 00:15:27.529 2.536 - 2.548: 97.8607% ( 163) 00:15:27.529 2.548 - 2.560: 98.1976% ( 40) 00:15:27.529 2.560 - 2.572: 98.3913% ( 23) 00:15:27.529 2.572 - 2.584: 98.5092% ( 14) 00:15:27.529 2.584 - 2.596: 98.6019% ( 11) 00:15:27.529 2.596 - 2.607: 98.6692% ( 8) 00:15:27.529 2.607 - 2.619: 98.7198% ( 6) 00:15:27.529 2.619 - 2.631: 98.7619% ( 5) 00:15:27.529 2.631 - 2.643: 98.7787% ( 2) 00:15:27.529 2.655 - 2.667: 98.8040% ( 3) 00:15:27.529 2.679 - 2.690: 98.8124% ( 1) 00:15:27.529 2.714 - 2.726: 98.8209% ( 1) 00:15:27.529 2.726 - 2.738: 98.8377% ( 2) 00:15:27.529 2.738 - 2.750: 98.8461% ( 1) 00:15:27.529 2.773 - 2.785: 98.8545% ( 1) 00:15:27.529 2.785 - 2.797: 98.8714% ( 2) 00:15:27.529 2.797 - 2.809: 98.8798% ( 1) 00:15:27.529 2.892 - 2.904: 98.8882% ( 1) 00:15:27.529 3.034 - 3.058: 98.8967% ( 1) 00:15:27.529 3.153 - 3.176: 98.9135% ( 2) 00:15:27.529 3.200 - 3.224: 98.9219% ( 1) 00:15:27.529 3.224 - 3.247: 98.9388% ( 2) 00:15:27.529 3.247 - 3.271: 98.9640% ( 3) 00:15:27.529 3.271 - 3.295: 98.9893% ( 3) 00:15:27.529 3.295 - 3.319: 99.0061% ( 2) 00:15:27.529 3.319 - 3.342: 99.0230% ( 2) 00:15:27.529 3.342 - 3.366: 99.0483% ( 3) 00:15:27.529 3.366 - 3.390: 99.0567% ( 1) 00:15:27.529 3.390 - 3.413: 99.0820% ( 3) 00:15:27.529 3.413 - 3.437: 99.1156% ( 4) 00:15:27.529 3.461 - 3.484: 99.1241% ( 1) 00:15:27.529 3.484 - 3.508: 99.1409% ( 2) 00:15:27.529 3.508 - 3.532: 99.1662% ( 3) 00:15:27.529 3.532 - 3.556: 99.1746% ( 1) 00:15:27.529 3.627 - 3.650: 99.1830% ( 1) 00:15:27.529 3.721 - 3.745: 99.1914% ( 1) 00:15:27.529 3.840 - 3.864: 99.1999% ( 1) 00:15:27.529 3.982 - 4.006: 99.2083% ( 1) 00:15:27.529 4.338 - 4.361: 99.2167% ( 1) 00:15:27.529 4.361 - 4.385: 99.2251% ( 1) 00:15:27.529 4.385 - 4.409: 99.2336% ( 1) 00:15:27.529 4.480 - 4.504: 99.2420% ( 1) 00:15:27.529 4.646 - 4.670: 99.2504% ( 1) 00:15:27.529 5.689 - 5.713: 99.2588% ( 1) 00:15:27.529 6.044 - 6.068: 99.2672% ( 1) 00:15:27.529 6.447 - 6.495: 99.2757% ( 1) 00:15:27.529 6.779 - 6.827: 99.2841% ( 1) 00:15:27.529 7.016 - 7.064: 99.2925% ( 1) 00:15:27.529 7.253 - 7.301: 99.3009% ( 1) 00:15:27.529 7.680 - 7.727: 99.3094% ( 1) 00:15:27.529 8.059 - 8.107: 99.3262% ( 2) 00:15:27.529 8.107 - 8.154: 99.3346% ( 1) 00:15:27.529 8.723 - 8.770: 99.3430% ( 1) 00:15:27.529 8.913 - 8.960: 99.3515% ( 1) 00:15:27.529 9.150 - 9.197: 99.3599% ( 1) 00:15:27.529 9.671 - 9.719: 99.3683% ( 1) 00:15:27.529 9.908 - 9.956: 99.3767% ( 1) 00:15:27.529 125.156 - 125.914: 99.3852% ( 1) 00:15:27.529 3398.163 - 3422.436: 99.3936% ( 1) 00:15:27.529 3616.616 - 3640.889: 99.4020% ( 1) 00:15:27.529 3980.705 - 4004.978: 99.9916% ( 70)
00:15:27.529 4004.978 - 4029.250: 100.0000% ( 1) 00:15:27.529 00:15:27.529 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:27.529 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:27.529 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:27.529 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:27.529 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:27.787 [ 00:15:27.787 { 00:15:27.787 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:27.787 "subtype": "Discovery", 00:15:27.787 "listen_addresses": [], 00:15:27.787 "allow_any_host": true, 00:15:27.787 "hosts": [] 00:15:27.787 }, 00:15:27.787 { 00:15:27.787 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:27.787 "subtype": "NVMe", 00:15:27.787 "listen_addresses": [ 00:15:27.787 { 00:15:27.787 "trtype": "VFIOUSER", 00:15:27.787 "adrfam": "IPv4", 00:15:27.787 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:27.787 "trsvcid": "0" 00:15:27.787 } 00:15:27.787 ], 00:15:27.787 "allow_any_host": true, 00:15:27.787 "hosts": [], 00:15:27.787 "serial_number": "SPDK1", 00:15:27.787 "model_number": "SPDK bdev Controller", 00:15:27.787 "max_namespaces": 32, 00:15:27.787 "min_cntlid": 1, 00:15:27.787 "max_cntlid": 65519, 00:15:27.787 "namespaces": [ 00:15:27.787 { 00:15:27.787 "nsid": 1, 00:15:27.787 "bdev_name": "Malloc1", 00:15:27.787 "name": "Malloc1", 00:15:27.787 "nguid": "30378F02A7364B598A326725419926CC", 00:15:27.787 "uuid": "30378f02-a736-4b59-8a32-6725419926cc" 00:15:27.787 } 00:15:27.787 ] 00:15:27.787 }, 00:15:27.787 { 00:15:27.787 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:27.787 "subtype": "NVMe", 00:15:27.787 "listen_addresses": [ 00:15:27.787 { 00:15:27.787 "trtype": "VFIOUSER", 00:15:27.787 "adrfam": "IPv4", 00:15:27.787 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:27.787 "trsvcid": "0" 00:15:27.787 } 00:15:27.787 ], 00:15:27.787 "allow_any_host": true, 00:15:27.787 "hosts": [], 00:15:27.787 "serial_number": "SPDK2", 00:15:27.787 "model_number": "SPDK bdev Controller", 00:15:27.787 "max_namespaces": 32, 00:15:27.787 "min_cntlid": 1, 00:15:27.787 "max_cntlid": 65519, 00:15:27.787 "namespaces": [ 00:15:27.787 { 00:15:27.787 "nsid": 1, 00:15:27.787 "bdev_name": "Malloc2", 00:15:27.787 "name": "Malloc2", 00:15:27.787 "nguid": "A91277010BAA431280B5CF4DE97CC7FA", 00:15:27.787 "uuid": "a9127701-0baa-4312-80b5-cf4de97cc7fa" 00:15:27.787 } 00:15:27.787 ] 00:15:27.787 } 00:15:27.787 ] 00:15:27.787 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:27.787 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2098574 00:15:27.787 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:27.787 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile 
/tmp/aer_touch_file 00:15:27.787 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:27.787 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:27.787 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:27.787 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:27.787 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:27.787 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:28.044 EAL: No free 2048 kB hugepages reported on node 1 00:15:28.044 [2024-07-26 11:23:23.643988] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:28.608 Malloc3 00:15:28.608 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:28.866 [2024-07-26 11:23:24.388835] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:28.866 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:28.866 Asynchronous Event Request test 00:15:28.866 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:28.866 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:28.866 Registering asynchronous event callbacks... 00:15:28.866 Starting namespace attribute notice tests for all controllers... 00:15:28.866 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:28.866 aer_cb - Changed Namespace 00:15:28.866 Cleaning up... 
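The namespace hot-attach flow exercised here reduces to three RPCs against the running target; the nvmf_get_subsystems dump that follows shows Malloc3 attached as nsid 2 and the namespace-attribute AER firing. A minimal sketch of the same sequence, assuming rpc.py from the SPDK tree and a target already serving nqn.2019-07.io.spdk:cnode1 (the remove_ns cleanup step is illustrative and not part of this run):
  scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3                        # 64 MiB malloc bdev, 512 B blocks
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2   # hot-attach as NSID 2; raises the namespace-attribute notice
  scripts/rpc.py nvmf_get_subsystems                                             # dump all subsystems as JSON to confirm
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2019-07.io.spdk:cnode1 2           # detach again (illustrative cleanup)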
00:15:29.123 [ 00:15:29.123 { 00:15:29.123 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:29.123 "subtype": "Discovery", 00:15:29.123 "listen_addresses": [], 00:15:29.123 "allow_any_host": true, 00:15:29.123 "hosts": [] 00:15:29.123 }, 00:15:29.123 { 00:15:29.123 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:29.123 "subtype": "NVMe", 00:15:29.123 "listen_addresses": [ 00:15:29.124 { 00:15:29.124 "trtype": "VFIOUSER", 00:15:29.124 "adrfam": "IPv4", 00:15:29.124 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:29.124 "trsvcid": "0" 00:15:29.124 } 00:15:29.124 ], 00:15:29.124 "allow_any_host": true, 00:15:29.124 "hosts": [], 00:15:29.124 "serial_number": "SPDK1", 00:15:29.124 "model_number": "SPDK bdev Controller", 00:15:29.124 "max_namespaces": 32, 00:15:29.124 "min_cntlid": 1, 00:15:29.124 "max_cntlid": 65519, 00:15:29.124 "namespaces": [ 00:15:29.124 { 00:15:29.124 "nsid": 1, 00:15:29.124 "bdev_name": "Malloc1", 00:15:29.124 "name": "Malloc1", 00:15:29.124 "nguid": "30378F02A7364B598A326725419926CC", 00:15:29.124 "uuid": "30378f02-a736-4b59-8a32-6725419926cc" 00:15:29.124 }, 00:15:29.124 { 00:15:29.124 "nsid": 2, 00:15:29.124 "bdev_name": "Malloc3", 00:15:29.124 "name": "Malloc3", 00:15:29.124 "nguid": "995596FC78474800A990232D9FC207E2", 00:15:29.124 "uuid": "995596fc-7847-4800-a990-232d9fc207e2" 00:15:29.124 } 00:15:29.124 ] 00:15:29.124 }, 00:15:29.124 { 00:15:29.124 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:29.124 "subtype": "NVMe", 00:15:29.124 "listen_addresses": [ 00:15:29.124 { 00:15:29.124 "trtype": "VFIOUSER", 00:15:29.124 "adrfam": "IPv4", 00:15:29.124 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:29.124 "trsvcid": "0" 00:15:29.124 } 00:15:29.124 ], 00:15:29.124 "allow_any_host": true, 00:15:29.124 "hosts": [], 00:15:29.124 "serial_number": "SPDK2", 00:15:29.124 "model_number": "SPDK bdev Controller", 00:15:29.124 "max_namespaces": 32, 00:15:29.124 "min_cntlid": 1, 00:15:29.124 "max_cntlid": 65519, 00:15:29.124 "namespaces": [ 00:15:29.124 { 00:15:29.124 "nsid": 1, 00:15:29.124 "bdev_name": "Malloc2", 00:15:29.124 "name": "Malloc2", 00:15:29.124 "nguid": "A91277010BAA431280B5CF4DE97CC7FA", 00:15:29.124 "uuid": "a9127701-0baa-4312-80b5-cf4de97cc7fa" 00:15:29.124 } 00:15:29.124 ] 00:15:29.124 } 00:15:29.124 ] 00:15:29.124 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2098574 00:15:29.124 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:29.124 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:29.124 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:29.124 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:29.124 [2024-07-26 11:23:24.700596] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
00:15:29.124 [2024-07-26 11:23:24.700696] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2098714 ] 00:15:29.124 EAL: No free 2048 kB hugepages reported on node 1 00:15:29.124 [2024-07-26 11:23:24.744871] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:29.124 [2024-07-26 11:23:24.750254] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:29.124 [2024-07-26 11:23:24.750289] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fda3d193000 00:15:29.124 [2024-07-26 11:23:24.751259] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:29.124 [2024-07-26 11:23:24.752269] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:29.124 [2024-07-26 11:23:24.753283] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:29.124 [2024-07-26 11:23:24.754290] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:29.124 [2024-07-26 11:23:24.755299] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:29.124 [2024-07-26 11:23:24.756308] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:29.124 [2024-07-26 11:23:24.757320] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:29.124 [2024-07-26 11:23:24.758332] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:29.124 [2024-07-26 11:23:24.759346] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:29.124 [2024-07-26 11:23:24.759371] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fda3d188000 00:15:29.124 [2024-07-26 11:23:24.760645] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:29.124 [2024-07-26 11:23:24.776410] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:29.124 [2024-07-26 11:23:24.776457] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:29.124 [2024-07-26 11:23:24.781582] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:29.124 [2024-07-26 11:23:24.781643] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:29.124 [2024-07-26 11:23:24.781744] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
wait for connect adminq (no timeout) 00:15:29.124 [2024-07-26 11:23:24.781771] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:29.124 [2024-07-26 11:23:24.781783] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:29.124 [2024-07-26 11:23:24.782594] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:29.124 [2024-07-26 11:23:24.782629] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:29.124 [2024-07-26 11:23:24.782646] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:29.124 [2024-07-26 11:23:24.783600] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:29.124 [2024-07-26 11:23:24.783623] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:29.124 [2024-07-26 11:23:24.783639] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:29.124 [2024-07-26 11:23:24.784610] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:29.124 [2024-07-26 11:23:24.784634] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:29.399 [2024-07-26 11:23:24.785619] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:29.399 [2024-07-26 11:23:24.785642] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:29.399 [2024-07-26 11:23:24.785653] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:29.399 [2024-07-26 11:23:24.785666] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:29.399 [2024-07-26 11:23:24.785777] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:29.399 [2024-07-26 11:23:24.785786] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:29.399 [2024-07-26 11:23:24.785795] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:29.399 [2024-07-26 11:23:24.786634] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:29.399 [2024-07-26 11:23:24.787636] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:29.399 [2024-07-26 11:23:24.788646] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:29.399 [2024-07-26 11:23:24.789638] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:29.399 [2024-07-26 11:23:24.789715] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:29.399 [2024-07-26 11:23:24.790656] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:29.399 [2024-07-26 11:23:24.790679] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:29.399 [2024-07-26 11:23:24.790689] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:29.399 [2024-07-26 11:23:24.790717] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:29.399 [2024-07-26 11:23:24.790737] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:29.399 [2024-07-26 11:23:24.790765] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:29.399 [2024-07-26 11:23:24.790777] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:29.399 [2024-07-26 11:23:24.790784] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:29.399 [2024-07-26 11:23:24.790804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:29.399 [2024-07-26 11:23:24.799447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:29.399 [2024-07-26 11:23:24.799472] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:29.399 [2024-07-26 11:23:24.799482] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:29.399 [2024-07-26 11:23:24.799491] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:29.400 [2024-07-26 11:23:24.799499] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:29.400 [2024-07-26 11:23:24.799508] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:29.400 [2024-07-26 11:23:24.799517] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:29.400 [2024-07-26 11:23:24.799526] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:29.400 [2024-07-26 11:23:24.799540] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:29.400 [2024-07-26 11:23:24.799562] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:29.400 [2024-07-26 11:23:24.807439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:29.400 [2024-07-26 11:23:24.807473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:29.400 [2024-07-26 11:23:24.807489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:29.400 [2024-07-26 11:23:24.807503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:29.400 [2024-07-26 11:23:24.807517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:29.400 [2024-07-26 11:23:24.807527] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:29.400 [2024-07-26 11:23:24.807545] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:29.400 [2024-07-26 11:23:24.807563] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:29.400 [2024-07-26 11:23:24.815438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:29.400 [2024-07-26 11:23:24.815458] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:29.400 [2024-07-26 11:23:24.815469] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:29.400 [2024-07-26 11:23:24.815487] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:29.400 [2024-07-26 11:23:24.815503] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:29.400 [2024-07-26 11:23:24.815520] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:29.400 [2024-07-26 11:23:24.823440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:29.400 [2024-07-26 11:23:24.823526] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:29.400 [2024-07-26 11:23:24.823545] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:29.400 [2024-07-26 11:23:24.823561] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:29.400 [2024-07-26 11:23:24.823571] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:29.400 [2024-07-26 
11:23:24.823578] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:29.400 [2024-07-26 11:23:24.823589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:29.400 [2024-07-26 11:23:24.831439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:29.400 [2024-07-26 11:23:24.831465] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:29.400 [2024-07-26 11:23:24.831484] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:29.400 [2024-07-26 11:23:24.831500] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:29.400 [2024-07-26 11:23:24.831515] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:29.400 [2024-07-26 11:23:24.831525] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:29.400 [2024-07-26 11:23:24.831532] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:29.400 [2024-07-26 11:23:24.831542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:29.400 [2024-07-26 11:23:24.839440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:29.400 [2024-07-26 11:23:24.839472] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:29.400 [2024-07-26 11:23:24.839490] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:29.400 [2024-07-26 11:23:24.839506] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:29.400 [2024-07-26 11:23:24.839516] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:29.400 [2024-07-26 11:23:24.839523] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:29.400 [2024-07-26 11:23:24.839534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:29.400 [2024-07-26 11:23:24.847443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:29.400 [2024-07-26 11:23:24.847467] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:29.400 [2024-07-26 11:23:24.847486] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:29.400 [2024-07-26 11:23:24.847505] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:29.400 [2024-07-26 
11:23:24.847520] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:15:29.400 [2024-07-26 11:23:24.847530] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:29.400 [2024-07-26 11:23:24.847539] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:29.400 [2024-07-26 11:23:24.847548] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:29.400 [2024-07-26 11:23:24.847557] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:29.400 [2024-07-26 11:23:24.847566] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:29.400 [2024-07-26 11:23:24.847594] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:29.400 [2024-07-26 11:23:24.855444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:29.400 [2024-07-26 11:23:24.855475] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:29.400 [2024-07-26 11:23:24.863441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:29.400 [2024-07-26 11:23:24.863470] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:29.400 [2024-07-26 11:23:24.871444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:29.400 [2024-07-26 11:23:24.871473] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:29.400 [2024-07-26 11:23:24.879439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:29.400 [2024-07-26 11:23:24.879475] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:29.400 [2024-07-26 11:23:24.879488] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:29.400 [2024-07-26 11:23:24.879495] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:29.400 [2024-07-26 11:23:24.879502] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:29.400 [2024-07-26 11:23:24.879509] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:29.400 [2024-07-26 11:23:24.879519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:29.400 [2024-07-26 11:23:24.879533] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:29.400 [2024-07-26 11:23:24.879543] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: 
*DEBUG*: prp1 = 0x2000002fc000 00:15:29.400 [2024-07-26 11:23:24.879550] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:29.400 [2024-07-26 11:23:24.879560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:29.400 [2024-07-26 11:23:24.879572] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:29.400 [2024-07-26 11:23:24.879587] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:29.400 [2024-07-26 11:23:24.879595] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:29.400 [2024-07-26 11:23:24.879605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:29.400 [2024-07-26 11:23:24.879619] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:29.400 [2024-07-26 11:23:24.879629] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:29.400 [2024-07-26 11:23:24.879636] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:29.400 [2024-07-26 11:23:24.879646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:29.400 [2024-07-26 11:23:24.887440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:29.400 [2024-07-26 11:23:24.887471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:29.401 [2024-07-26 11:23:24.887491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:29.401 [2024-07-26 11:23:24.887505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:29.401 ===================================================== 00:15:29.401 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:29.401 ===================================================== 00:15:29.401 Controller Capabilities/Features 00:15:29.401 ================================ 00:15:29.401 Vendor ID: 4e58 00:15:29.401 Subsystem Vendor ID: 4e58 00:15:29.401 Serial Number: SPDK2 00:15:29.401 Model Number: SPDK bdev Controller 00:15:29.401 Firmware Version: 24.09 00:15:29.401 Recommended Arb Burst: 6 00:15:29.401 IEEE OUI Identifier: 8d 6b 50 00:15:29.401 Multi-path I/O 00:15:29.401 May have multiple subsystem ports: Yes 00:15:29.401 May have multiple controllers: Yes 00:15:29.401 Associated with SR-IOV VF: No 00:15:29.401 Max Data Transfer Size: 131072 00:15:29.401 Max Number of Namespaces: 32 00:15:29.401 Max Number of I/O Queues: 127 00:15:29.401 NVMe Specification Version (VS): 1.3 00:15:29.401 NVMe Specification Version (Identify): 1.3 00:15:29.401 Maximum Queue Entries: 256 00:15:29.401 Contiguous Queues Required: Yes 00:15:29.401 Arbitration Mechanisms Supported 00:15:29.401 Weighted Round Robin: Not Supported 00:15:29.401 Vendor Specific: Not Supported 00:15:29.401 Reset Timeout: 15000 ms 00:15:29.401 Doorbell Stride: 4 
bytes 00:15:29.401 NVM Subsystem Reset: Not Supported 00:15:29.401 Command Sets Supported 00:15:29.401 NVM Command Set: Supported 00:15:29.401 Boot Partition: Not Supported 00:15:29.401 Memory Page Size Minimum: 4096 bytes 00:15:29.401 Memory Page Size Maximum: 4096 bytes 00:15:29.401 Persistent Memory Region: Not Supported 00:15:29.401 Optional Asynchronous Events Supported 00:15:29.401 Namespace Attribute Notices: Supported 00:15:29.401 Firmware Activation Notices: Not Supported 00:15:29.401 ANA Change Notices: Not Supported 00:15:29.401 PLE Aggregate Log Change Notices: Not Supported 00:15:29.401 LBA Status Info Alert Notices: Not Supported 00:15:29.401 EGE Aggregate Log Change Notices: Not Supported 00:15:29.401 Normal NVM Subsystem Shutdown event: Not Supported 00:15:29.401 Zone Descriptor Change Notices: Not Supported 00:15:29.401 Discovery Log Change Notices: Not Supported 00:15:29.401 Controller Attributes 00:15:29.401 128-bit Host Identifier: Supported 00:15:29.401 Non-Operational Permissive Mode: Not Supported 00:15:29.401 NVM Sets: Not Supported 00:15:29.401 Read Recovery Levels: Not Supported 00:15:29.401 Endurance Groups: Not Supported 00:15:29.401 Predictable Latency Mode: Not Supported 00:15:29.401 Traffic Based Keep Alive: Not Supported 00:15:29.401 Namespace Granularity: Not Supported 00:15:29.401 SQ Associations: Not Supported 00:15:29.401 UUID List: Not Supported 00:15:29.401 Multi-Domain Subsystem: Not Supported 00:15:29.401 Fixed Capacity Management: Not Supported 00:15:29.401 Variable Capacity Management: Not Supported 00:15:29.401 Delete Endurance Group: Not Supported 00:15:29.401 Delete NVM Set: Not Supported 00:15:29.401 Extended LBA Formats Supported: Not Supported 00:15:29.401 Flexible Data Placement Supported: Not Supported 00:15:29.401 00:15:29.401 Controller Memory Buffer Support 00:15:29.401 ================================ 00:15:29.401 Supported: No 00:15:29.401 00:15:29.401 Persistent Memory Region Support 00:15:29.401 ================================ 00:15:29.401 Supported: No 00:15:29.401 00:15:29.401 Admin Command Set Attributes 00:15:29.401 ============================ 00:15:29.401 Security Send/Receive: Not Supported 00:15:29.401 Format NVM: Not Supported 00:15:29.401 Firmware Activate/Download: Not Supported 00:15:29.401 Namespace Management: Not Supported 00:15:29.401 Device Self-Test: Not Supported 00:15:29.401 Directives: Not Supported 00:15:29.401 NVMe-MI: Not Supported 00:15:29.401 Virtualization Management: Not Supported 00:15:29.401 Doorbell Buffer Config: Not Supported 00:15:29.401 Get LBA Status Capability: Not Supported 00:15:29.401 Command & Feature Lockdown Capability: Not Supported 00:15:29.401 Abort Command Limit: 4 00:15:29.401 Async Event Request Limit: 4 00:15:29.401 Number of Firmware Slots: N/A 00:15:29.401 Firmware Slot 1 Read-Only: N/A 00:15:29.401 Firmware Activation Without Reset: N/A 00:15:29.401 Multiple Update Detection Support: N/A 00:15:29.401 Firmware Update Granularity: No Information Provided 00:15:29.401 Per-Namespace SMART Log: No 00:15:29.401 Asymmetric Namespace Access Log Page: Not Supported 00:15:29.401 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:29.401 Command Effects Log Page: Supported 00:15:29.401 Get Log Page Extended Data: Supported 00:15:29.401 Telemetry Log Pages: Not Supported 00:15:29.401 Persistent Event Log Pages: Not Supported 00:15:29.401 Supported Log Pages Log Page: May Support 00:15:29.401 Commands Supported & Effects Log Page: Not Supported 00:15:29.401 Feature Identifiers & Effects Log
Page: May Support 00:15:29.401 NVMe-MI Commands & Effects Log Page: May Support 00:15:29.401 Data Area 4 for Telemetry Log: Not Supported 00:15:29.401 Error Log Page Entries Supported: 128 00:15:29.401 Keep Alive: Supported 00:15:29.401 Keep Alive Granularity: 10000 ms 00:15:29.401 00:15:29.401 NVM Command Set Attributes 00:15:29.401 ========================== 00:15:29.401 Submission Queue Entry Size 00:15:29.401 Max: 64 00:15:29.401 Min: 64 00:15:29.401 Completion Queue Entry Size 00:15:29.401 Max: 16 00:15:29.401 Min: 16 00:15:29.401 Number of Namespaces: 32 00:15:29.401 Compare Command: Supported 00:15:29.401 Write Uncorrectable Command: Not Supported 00:15:29.401 Dataset Management Command: Supported 00:15:29.401 Write Zeroes Command: Supported 00:15:29.401 Set Features Save Field: Not Supported 00:15:29.401 Reservations: Not Supported 00:15:29.401 Timestamp: Not Supported 00:15:29.401 Copy: Supported 00:15:29.401 Volatile Write Cache: Present 00:15:29.401 Atomic Write Unit (Normal): 1 00:15:29.401 Atomic Write Unit (PFail): 1 00:15:29.401 Atomic Compare & Write Unit: 1 00:15:29.401 Fused Compare & Write: Supported 00:15:29.401 Scatter-Gather List 00:15:29.401 SGL Command Set: Supported (Dword aligned) 00:15:29.401 SGL Keyed: Not Supported 00:15:29.401 SGL Bit Bucket Descriptor: Not Supported 00:15:29.401 SGL Metadata Pointer: Not Supported 00:15:29.401 Oversized SGL: Not Supported 00:15:29.401 SGL Metadata Address: Not Supported 00:15:29.401 SGL Offset: Not Supported 00:15:29.401 Transport SGL Data Block: Not Supported 00:15:29.401 Replay Protected Memory Block: Not Supported 00:15:29.401 00:15:29.401 Firmware Slot Information 00:15:29.401 ========================= 00:15:29.401 Active slot: 1 00:15:29.401 Slot 1 Firmware Revision: 24.09 00:15:29.401 00:15:29.401 00:15:29.401 Commands Supported and Effects 00:15:29.401 ============================== 00:15:29.401 Admin Commands 00:15:29.401 -------------- 00:15:29.401 Get Log Page (02h): Supported 00:15:29.401 Identify (06h): Supported 00:15:29.401 Abort (08h): Supported 00:15:29.401 Set Features (09h): Supported 00:15:29.401 Get Features (0Ah): Supported 00:15:29.401 Asynchronous Event Request (0Ch): Supported 00:15:29.401 Keep Alive (18h): Supported 00:15:29.401 I/O Commands 00:15:29.401 ------------ 00:15:29.401 Flush (00h): Supported LBA-Change 00:15:29.401 Write (01h): Supported LBA-Change 00:15:29.401 Read (02h): Supported 00:15:29.401 Compare (05h): Supported 00:15:29.401 Write Zeroes (08h): Supported LBA-Change 00:15:29.401 Dataset Management (09h): Supported LBA-Change 00:15:29.401 Copy (19h): Supported LBA-Change 00:15:29.401 00:15:29.401 Error Log 00:15:29.401 ========= 00:15:29.401 00:15:29.401 Arbitration 00:15:29.401 =========== 00:15:29.401 Arbitration Burst: 1 00:15:29.401 00:15:29.401 Power Management 00:15:29.401 ================ 00:15:29.401 Number of Power States: 1 00:15:29.401 Current Power State: Power State #0 00:15:29.401 Power State #0: 00:15:29.401 Max Power: 0.00 W 00:15:29.401 Non-Operational State: Operational 00:15:29.401 Entry Latency: Not Reported 00:15:29.401 Exit Latency: Not Reported 00:15:29.401 Relative Read Throughput: 0 00:15:29.401 Relative Read Latency: 0 00:15:29.401 Relative Write Throughput: 0 00:15:29.401 Relative Write Latency: 0 00:15:29.401 Idle Power: Not Reported 00:15:29.401 Active Power: Not Reported 00:15:29.402 Non-Operational Permissive Mode: Not Supported 00:15:29.402 00:15:29.402 Health Information 00:15:29.402 ================== 00:15:29.402 Critical Warnings: 00:15:29.402
Available Spare Space: OK 00:15:29.402 Temperature: OK 00:15:29.402 Device Reliability: OK 00:15:29.402 Read Only: No 00:15:29.402 Volatile Memory Backup: OK 00:15:29.402 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:29.402 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:29.402 Available Spare: 0% 00:15:29.402 Available Spare Threshold: 0% 00:15:29.402 [2024-07-26 11:23:24.887645] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:29.402 [2024-07-26 11:23:24.895438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:29.402 [2024-07-26 11:23:24.895497] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:29.402 [2024-07-26 11:23:24.895518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:29.402 [2024-07-26 11:23:24.895530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:29.402 [2024-07-26 11:23:24.895541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:29.402 [2024-07-26 11:23:24.895552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:29.402 [2024-07-26 11:23:24.895650] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:29.402 [2024-07-26 11:23:24.895674] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:29.402 [2024-07-26 11:23:24.896652] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:29.402 [2024-07-26 11:23:24.896732] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:29.402 [2024-07-26 11:23:24.896748] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:29.402 [2024-07-26 11:23:24.897660] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:29.402 [2024-07-26 11:23:24.897688] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:29.402 [2024-07-26 11:23:24.897748] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:29.402 [2024-07-26 11:23:24.899096] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:29.402 Life Percentage Used: 0% 00:15:29.402 Data Units Read: 0 00:15:29.402 Data Units Written: 0 00:15:29.402 Host Read Commands: 0 00:15:29.402 Host Write Commands: 0 00:15:29.402 Controller Busy Time: 0 minutes 00:15:29.402 Power Cycles: 0 00:15:29.402 Power On Hours: 0 hours 00:15:29.402 Unsafe Shutdowns: 0 00:15:29.402 Unrecoverable Media Errors: 0 00:15:29.402 Lifetime Error Log Entries: 0 00:15:29.402 Warning Temperature Time: 0 minutes 00:15:29.402 Critical Temperature Time: 0 minutes 00:15:29.402
00:15:29.402 Number of Queues 00:15:29.402 ================ 00:15:29.402 Number of I/O Submission Queues: 127 00:15:29.402 Number of I/O Completion Queues: 127 00:15:29.402 00:15:29.402 Active Namespaces 00:15:29.402 ================= 00:15:29.402 Namespace ID:1 00:15:29.402 Error Recovery Timeout: Unlimited 00:15:29.402 Command Set Identifier: NVM (00h) 00:15:29.402 Deallocate: Supported 00:15:29.402 Deallocated/Unwritten Error: Not Supported 00:15:29.402 Deallocated Read Value: Unknown 00:15:29.402 Deallocate in Write Zeroes: Not Supported 00:15:29.402 Deallocated Guard Field: 0xFFFF 00:15:29.402 Flush: Supported 00:15:29.402 Reservation: Supported 00:15:29.402 Namespace Sharing Capabilities: Multiple Controllers 00:15:29.402 Size (in LBAs): 131072 (0GiB) 00:15:29.402 Capacity (in LBAs): 131072 (0GiB) 00:15:29.402 Utilization (in LBAs): 131072 (0GiB) 00:15:29.402 NGUID: A91277010BAA431280B5CF4DE97CC7FA 00:15:29.402 UUID: a9127701-0baa-4312-80b5-cf4de97cc7fa 00:15:29.402 Thin Provisioning: Not Supported 00:15:29.402 Per-NS Atomic Units: Yes 00:15:29.402 Atomic Boundary Size (Normal): 0 00:15:29.402 Atomic Boundary Size (PFail): 0 00:15:29.402 Atomic Boundary Offset: 0 00:15:29.402 Maximum Single Source Range Length: 65535 00:15:29.402 Maximum Copy Length: 65535 00:15:29.402 Maximum Source Range Count: 1 00:15:29.402 NGUID/EUI64 Never Reused: No 00:15:29.402 Namespace Write Protected: No 00:15:29.402 Number of LBA Formats: 1 00:15:29.402 Current LBA Format: LBA Format #00 00:15:29.402 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:29.402 00:15:29.402 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:29.402 EAL: No free 2048 kB hugepages reported on node 1 00:15:29.703 [2024-07-26 11:23:25.160533] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:34.961 Initializing NVMe Controllers 00:15:34.961 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:34.961 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:34.961 Initialization complete. Launching workers. 
00:15:34.961 ======================================================== 00:15:34.961 Latency(us) 00:15:34.961 Device Information : IOPS MiB/s Average min max 00:15:34.961 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 25687.01 100.34 4982.49 1375.99 7605.86 00:15:34.961 ======================================================== 00:15:34.961 Total : 25687.01 100.34 4982.49 1375.99 7605.86 00:15:34.961 00:15:34.961 [2024-07-26 11:23:30.266771] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:34.961 11:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:34.961 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.961 [2024-07-26 11:23:30.527550] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:40.224 Initializing NVMe Controllers 00:15:40.224 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:40.224 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:40.224 Initialization complete. Launching workers. 00:15:40.224 ======================================================== 00:15:40.224 Latency(us) 00:15:40.224 Device Information : IOPS MiB/s Average min max 00:15:40.224 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 24464.62 95.56 5230.77 1397.38 10566.65 00:15:40.224 ======================================================== 00:15:40.224 Total : 24464.62 95.56 5230.77 1397.38 10566.65 00:15:40.224 00:15:40.224 [2024-07-26 11:23:35.547450] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:40.224 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:40.224 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.224 [2024-07-26 11:23:35.775549] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:45.484 [2024-07-26 11:23:40.908567] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:45.484 Initializing NVMe Controllers 00:15:45.484 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:45.484 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:45.484 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:45.484 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:45.484 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:45.484 Initialization complete. Launching workers. 
00:15:45.484 Starting thread on core 2 00:15:45.484 Starting thread on core 3 00:15:45.484 Starting thread on core 1 00:15:45.484 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:45.484 EAL: No free 2048 kB hugepages reported on node 1 00:15:45.742 [2024-07-26 11:23:41.225926] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:49.021 [2024-07-26 11:23:44.293265] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:49.021 Initializing NVMe Controllers 00:15:49.021 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:49.021 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:49.021 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:49.021 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:49.021 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:49.021 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:49.021 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:49.021 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:49.021 Initialization complete. Launching workers. 00:15:49.021 Starting thread on core 1 with urgent priority queue 00:15:49.021 Starting thread on core 2 with urgent priority queue 00:15:49.021 Starting thread on core 3 with urgent priority queue 00:15:49.021 Starting thread on core 0 with urgent priority queue 00:15:49.022 SPDK bdev Controller (SPDK2 ) core 0: 3434.00 IO/s 29.12 secs/100000 ios 00:15:49.022 SPDK bdev Controller (SPDK2 ) core 1: 3705.33 IO/s 26.99 secs/100000 ios 00:15:49.022 SPDK bdev Controller (SPDK2 ) core 2: 3997.67 IO/s 25.01 secs/100000 ios 00:15:49.022 SPDK bdev Controller (SPDK2 ) core 3: 2938.67 IO/s 34.03 secs/100000 ios 00:15:49.022 ======================================================== 00:15:49.022 00:15:49.022 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:49.022 EAL: No free 2048 kB hugepages reported on node 1 00:15:49.022 [2024-07-26 11:23:44.621018] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:49.022 Initializing NVMe Controllers 00:15:49.022 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:49.022 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:49.022 Namespace ID: 1 size: 0GB 00:15:49.022 Initialization complete. 00:15:49.022 INFO: using host memory buffer for IO 00:15:49.022 Hello world! 
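Every example binary in this block (@84-@88 above: perf read, perf write, reconnect, arbitration, hello_world, with the overhead tool at @89 following) targets the same controller through an identical -r transport ID; only the tool and its workload flags change. A condensed sketch, with paths and flags copied from the echoed commands (the loop itself is a paraphrase, not part of the script):

  BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
  # @84/@85: identical perf runs, read vs. write
  # (-q 128: queue depth, -o 4096: I/O size in bytes, -t 5: seconds, -c 0x2: core mask;
  #  -s 256 and -g are the memory options carried over unchanged from the log)
  for wl in read write; do
    "$BIN/bin/spdk_nvme_perf" -r "$TRID" -s 256 -g -q 128 -o 4096 -w "$wl" -t 5 -c 0x2
  done
  # @86: queue-pair reconnect exercise, 50/50 mixed workload across cores 1-3 (-c 0xE)
  "$BIN/examples/reconnect" -r "$TRID" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE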
00:15:49.022 [2024-07-26 11:23:44.630079] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:49.279 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:49.279 EAL: No free 2048 kB hugepages reported on node 1 00:15:49.536 [2024-07-26 11:23:44.949038] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:50.469 Initializing NVMe Controllers 00:15:50.469 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:50.469 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:50.469 Initialization complete. Launching workers. 00:15:50.469 submit (in ns) avg, min, max = 8424.4, 4207.4, 4007416.3 00:15:50.469 complete (in ns) avg, min, max = 31663.6, 2468.1, 5000000.0 00:15:50.469 00:15:50.469 Submit histogram 00:15:50.469 ================ 00:15:50.469 Range in us Cumulative Count 00:15:50.469 4.196 - 4.219: 0.0085% ( 1) 00:15:50.469 4.219 - 4.243: 0.1872% ( 21) 00:15:50.469 4.243 - 4.267: 1.0129% ( 97) 00:15:50.469 4.267 - 4.290: 3.6939% ( 315) 00:15:50.469 4.290 - 4.314: 9.0050% ( 624) 00:15:50.469 4.314 - 4.338: 16.9206% ( 930) 00:15:50.469 4.338 - 4.361: 25.5426% ( 1013) 00:15:50.469 4.361 - 4.385: 31.7559% ( 730) 00:15:50.469 4.385 - 4.409: 35.4073% ( 429) 00:15:50.469 4.409 - 4.433: 36.7436% ( 157) 00:15:50.469 4.433 - 4.456: 37.4926% ( 88) 00:15:50.469 4.456 - 4.480: 38.4884% ( 117) 00:15:50.469 4.480 - 4.504: 40.8716% ( 280) 00:15:50.469 4.504 - 4.527: 44.6251% ( 441) 00:15:50.469 4.527 - 4.551: 49.1531% ( 532) 00:15:50.469 4.551 - 4.575: 53.0003% ( 452) 00:15:50.469 4.575 - 4.599: 55.5452% ( 299) 00:15:50.469 4.599 - 4.622: 56.9155% ( 161) 00:15:50.469 4.622 - 4.646: 57.6645% ( 88) 00:15:50.469 4.646 - 4.670: 58.1411% ( 56) 00:15:50.469 4.670 - 4.693: 58.5497% ( 48) 00:15:50.469 4.693 - 4.717: 58.9837% ( 51) 00:15:50.469 4.717 - 4.741: 59.7072% ( 85) 00:15:50.469 4.741 - 4.764: 60.2264% ( 61) 00:15:50.469 4.764 - 4.788: 60.5669% ( 40) 00:15:50.469 4.788 - 4.812: 60.6860% ( 14) 00:15:50.469 4.812 - 4.836: 60.8052% ( 14) 00:15:50.469 4.836 - 4.859: 60.8733% ( 8) 00:15:50.469 4.859 - 4.883: 61.6648% ( 93) 00:15:50.469 4.883 - 4.907: 66.4227% ( 559) 00:15:50.469 4.907 - 4.930: 77.4449% ( 1295) 00:15:50.469 4.930 - 4.954: 88.3479% ( 1281) 00:15:50.469 4.954 - 4.978: 95.0549% ( 788) 00:15:50.469 4.978 - 5.001: 96.0252% ( 114) 00:15:50.469 5.001 - 5.025: 96.4508% ( 50) 00:15:50.469 5.025 - 5.049: 96.7316% ( 33) 00:15:50.469 5.049 - 5.073: 96.9189% ( 22) 00:15:50.469 5.073 - 5.096: 96.9785% ( 7) 00:15:50.469 5.096 - 5.120: 97.0636% ( 10) 00:15:50.469 5.120 - 5.144: 97.1402% ( 9) 00:15:50.469 5.144 - 5.167: 97.2764% ( 16) 00:15:50.469 5.167 - 5.191: 97.5062% ( 27) 00:15:50.469 5.191 - 5.215: 97.7530% ( 29) 00:15:50.469 5.215 - 5.239: 97.9743% ( 26) 00:15:50.469 5.239 - 5.262: 98.0424% ( 8) 00:15:50.469 5.262 - 5.286: 98.0935% ( 6) 00:15:50.469 5.286 - 5.310: 98.1360% ( 5) 00:15:50.469 5.310 - 5.333: 98.1956% ( 7) 00:15:50.469 5.333 - 5.357: 98.2296% ( 4) 00:15:50.469 5.357 - 5.381: 98.2977% ( 8) 00:15:50.469 5.381 - 5.404: 98.3148% ( 2) 00:15:50.469 5.404 - 5.428: 98.3403% ( 3) 00:15:50.469 5.428 - 5.452: 98.3999% ( 7) 00:15:50.469 5.452 - 5.476: 98.4339% ( 4) 00:15:50.469 5.476 - 5.499: 98.4765% ( 5) 00:15:50.469 5.499 - 5.523: 98.5446% ( 8) 
00:15:50.469 5.523 - 5.547: 98.5701% ( 3) 00:15:50.469 5.547 - 5.570: 98.5871% ( 2) 00:15:50.469 5.570 - 5.594: 98.6212% ( 4) 00:15:50.469 5.594 - 5.618: 98.6297% ( 1) 00:15:50.469 5.618 - 5.641: 98.6552% ( 3) 00:15:50.469 5.641 - 5.665: 98.6807% ( 3) 00:15:50.469 5.665 - 5.689: 98.6893% ( 1) 00:15:50.469 5.689 - 5.713: 98.7148% ( 3) 00:15:50.469 5.713 - 5.736: 98.7318% ( 2) 00:15:50.469 5.736 - 5.760: 98.7573% ( 3) 00:15:50.469 5.760 - 5.784: 98.7744% ( 2) 00:15:50.469 5.784 - 5.807: 98.7914% ( 2) 00:15:50.469 5.807 - 5.831: 98.7999% ( 1) 00:15:50.469 5.831 - 5.855: 98.8425% ( 5) 00:15:50.469 5.855 - 5.879: 98.8595% ( 2) 00:15:50.469 5.879 - 5.902: 98.8850% ( 3) 00:15:50.469 5.902 - 5.926: 98.8935% ( 1) 00:15:50.469 5.926 - 5.950: 98.9020% ( 1) 00:15:50.469 6.044 - 6.068: 98.9191% ( 2) 00:15:50.469 6.068 - 6.116: 98.9276% ( 1) 00:15:50.469 6.116 - 6.163: 98.9361% ( 1) 00:15:50.469 6.163 - 6.210: 98.9446% ( 1) 00:15:50.469 6.210 - 6.258: 98.9786% ( 4) 00:15:50.469 6.258 - 6.305: 98.9957% ( 2) 00:15:50.469 6.305 - 6.353: 99.0127% ( 2) 00:15:50.469 6.353 - 6.400: 99.0467% ( 4) 00:15:50.469 6.400 - 6.447: 99.0638% ( 2) 00:15:50.469 6.590 - 6.637: 99.0808% ( 2) 00:15:50.469 6.637 - 6.684: 99.0893% ( 1) 00:15:50.469 6.684 - 6.732: 99.0978% ( 1) 00:15:50.469 6.732 - 6.779: 99.1063% ( 1) 00:15:50.469 6.874 - 6.921: 99.1148% ( 1) 00:15:50.469 6.969 - 7.016: 99.1233% ( 1) 00:15:50.469 7.064 - 7.111: 99.1318% ( 1) 00:15:50.469 7.253 - 7.301: 99.1404% ( 1) 00:15:50.469 7.633 - 7.680: 99.1489% ( 1) 00:15:50.469 7.775 - 7.822: 99.1574% ( 1) 00:15:50.469 7.822 - 7.870: 99.1659% ( 1) 00:15:50.469 7.917 - 7.964: 99.1744% ( 1) 00:15:50.469 7.964 - 8.012: 99.1914% ( 2) 00:15:50.469 8.012 - 8.059: 99.1999% ( 1) 00:15:50.469 8.201 - 8.249: 99.2084% ( 1) 00:15:50.469 8.249 - 8.296: 99.2255% ( 2) 00:15:50.469 8.296 - 8.344: 99.2340% ( 1) 00:15:50.469 8.344 - 8.391: 99.2425% ( 1) 00:15:50.469 8.391 - 8.439: 99.2510% ( 1) 00:15:50.469 8.486 - 8.533: 99.2595% ( 1) 00:15:50.469 8.581 - 8.628: 99.2680% ( 1) 00:15:50.469 8.676 - 8.723: 99.2765% ( 1) 00:15:50.469 8.723 - 8.770: 99.2936% ( 2) 00:15:50.469 8.770 - 8.818: 99.3021% ( 1) 00:15:50.469 8.865 - 8.913: 99.3106% ( 1) 00:15:50.469 8.913 - 8.960: 99.3191% ( 1) 00:15:50.469 9.007 - 9.055: 99.3276% ( 1) 00:15:50.469 9.102 - 9.150: 99.3446% ( 2) 00:15:50.469 9.197 - 9.244: 99.3616% ( 2) 00:15:50.469 9.244 - 9.292: 99.3702% ( 1) 00:15:50.469 9.339 - 9.387: 99.3787% ( 1) 00:15:50.469 9.387 - 9.434: 99.3957% ( 2) 00:15:50.469 9.434 - 9.481: 99.4042% ( 1) 00:15:50.469 9.481 - 9.529: 99.4297% ( 3) 00:15:50.469 9.529 - 9.576: 99.4468% ( 2) 00:15:50.469 9.576 - 9.624: 99.4638% ( 2) 00:15:50.469 9.624 - 9.671: 99.4723% ( 1) 00:15:50.469 9.671 - 9.719: 99.4893% ( 2) 00:15:50.469 9.766 - 9.813: 99.4978% ( 1) 00:15:50.469 9.861 - 9.908: 99.5149% ( 2) 00:15:50.469 9.908 - 9.956: 99.5404% ( 3) 00:15:50.469 10.098 - 10.145: 99.5489% ( 1) 00:15:50.469 10.145 - 10.193: 99.5574% ( 1) 00:15:50.469 10.193 - 10.240: 99.5829% ( 3) 00:15:50.469 10.335 - 10.382: 99.5915% ( 1) 00:15:50.469 10.382 - 10.430: 99.6085% ( 2) 00:15:50.469 10.572 - 10.619: 99.6170% ( 1) 00:15:50.469 10.619 - 10.667: 99.6425% ( 3) 00:15:50.469 10.667 - 10.714: 99.6681% ( 3) 00:15:50.469 10.809 - 10.856: 99.6766% ( 1) 00:15:50.469 10.856 - 10.904: 99.6936% ( 2) 00:15:50.469 10.999 - 11.046: 99.7021% ( 1) 00:15:50.469 11.046 - 11.093: 99.7106% ( 1) 00:15:50.469 11.330 - 11.378: 99.7191% ( 1) 00:15:50.470 11.615 - 11.662: 99.7276% ( 1) 00:15:50.470 11.662 - 11.710: 99.7447% ( 2) 00:15:50.470 11.710 - 11.757: 99.7532% 
( 1) 00:15:50.470 11.757 - 11.804: 99.7617% ( 1) 00:15:50.470 11.899 - 11.947: 99.7702% ( 1) 00:15:50.470 11.947 - 11.994: 99.7787% ( 1) 00:15:50.470 12.231 - 12.326: 99.7872% ( 1) 00:15:50.470 12.610 - 12.705: 99.7957% ( 1) 00:15:50.470 12.705 - 12.800: 99.8128% ( 2) 00:15:50.470 12.895 - 12.990: 99.8213% ( 1) 00:15:50.470 13.464 - 13.559: 99.8383% ( 2) 00:15:50.470 13.559 - 13.653: 99.8553% ( 2) 00:15:50.470 14.601 - 14.696: 99.8638% ( 1) 00:15:50.470 14.981 - 15.076: 99.8723% ( 1) 00:15:50.470 15.076 - 15.170: 99.8808% ( 1) 00:15:50.470 15.360 - 15.455: 99.8979% ( 2) 00:15:50.470 16.213 - 16.308: 99.9064% ( 1) 00:15:50.470 3980.705 - 4004.978: 99.9745% ( 8) 00:15:50.470 4004.978 - 4029.250: 100.0000% ( 3) 00:15:50.470 00:15:50.470 Complete histogram 00:15:50.470 ================== 00:15:50.470 Range in us Cumulative Count 00:15:50.470 2.465 - 2.477: 0.9362% ( 110) 00:15:50.470 2.477 - 2.489: 26.1214% ( 2959) 00:15:50.470 2.489 - 2.501: 57.7326% ( 3714) 00:15:50.470 2.501 - 2.513: 61.7074% ( 467) 00:15:50.470 2.513 - 2.524: 71.4699% ( 1147) 00:15:50.470 2.524 - 2.536: 88.5352% ( 2005) 00:15:50.470 2.536 - 2.548: 93.6761% ( 604) 00:15:50.470 2.548 - 2.560: 96.6465% ( 349) 00:15:50.470 [2024-07-26 11:23:46.044517] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:50.470 2.560 - 2.572: 97.7275% ( 127) 00:15:50.470 2.572 - 2.584: 98.1530% ( 50) 00:15:50.470 2.584 - 2.596: 98.3828% ( 27) 00:15:50.470 2.596 - 2.607: 98.5616% ( 21) 00:15:50.470 2.607 - 2.619: 98.6297% ( 8) 00:15:50.470 2.619 - 2.631: 98.6807% ( 6) 00:15:50.470 2.655 - 2.667: 98.6893% ( 1) 00:15:50.470 2.667 - 2.679: 98.6978% ( 1) 00:15:50.470 2.714 - 2.726: 98.7063% ( 1) 00:15:50.470 2.726 - 2.738: 98.7148% ( 1) 00:15:50.470 2.738 - 2.750: 98.7403% ( 3) 00:15:50.470 2.750 - 2.761: 98.7488% ( 1) 00:15:50.470 2.761 - 2.773: 98.7659% ( 2) 00:15:50.470 2.773 - 2.785: 98.7744% ( 1) 00:15:50.470 2.785 - 2.797: 98.7999% ( 3) 00:15:50.470 2.797 - 2.809: 98.8084% ( 1) 00:15:50.470 2.892 - 2.904: 98.8169% ( 1) 00:15:50.470 2.904 - 2.916: 98.8254% ( 1) 00:15:50.470 2.939 - 2.951: 98.8339% ( 1) 00:15:50.470 2.951 - 2.963: 98.8425% ( 1) 00:15:50.470 2.987 - 2.999: 98.8510% ( 1) 00:15:50.470 3.022 - 3.034: 98.8595% ( 1) 00:15:50.470 3.176 - 3.200: 98.8680% ( 1) 00:15:50.470 3.200 - 3.224: 98.8765% ( 1) 00:15:50.470 3.342 - 3.366: 98.8850% ( 1) 00:15:50.470 3.437 - 3.461: 98.8935% ( 1) 00:15:50.470 3.461 - 3.484: 98.9105% ( 2) 00:15:50.470 3.484 - 3.508: 98.9191% ( 1) 00:15:50.470 3.508 - 3.532: 98.9531% ( 4) 00:15:50.470 3.532 - 3.556: 98.9701% ( 2) 00:15:50.470 3.556 - 3.579: 99.0127% ( 5) 00:15:50.470 3.579 - 3.603: 99.0212% ( 1) 00:15:50.470 3.603 - 3.627: 99.0382% ( 2) 00:15:50.470 3.627 - 3.650: 99.0467% ( 1) 00:15:50.470 3.650 - 3.674: 99.0552% ( 1) 00:15:50.470 3.721 - 3.745: 99.0638% ( 1) 00:15:50.470 3.793 - 3.816: 99.0723% ( 1) 00:15:50.470 4.006 - 4.030: 99.0808% ( 1) 00:15:50.470 4.124 - 4.148: 99.0893% ( 1) 00:15:50.470 4.599 - 4.622: 99.0978% ( 1) 00:15:50.470 5.001 - 5.025: 99.1063% ( 1) 00:15:50.470 5.144 - 5.167: 99.1148% ( 1) 00:15:50.470 5.191 - 5.215: 99.1233% ( 1) 00:15:50.470 6.116 - 6.163: 99.1318% ( 1) 00:15:50.470 6.732 - 6.779: 99.1404% ( 1) 00:15:50.470 6.827 - 6.874: 99.1489% ( 1) 00:15:50.470 7.111 - 7.159: 99.1574% ( 1) 00:15:50.470 7.159 - 7.206: 99.1659% ( 1) 00:15:50.470 7.206 - 7.253: 99.1744% ( 1) 00:15:50.470 7.253 - 7.301: 99.1829% ( 1) 00:15:50.470 7.633 - 7.680: 99.1914% ( 1) 00:15:50.470 7.870 - 7.917: 99.1999% ( 1) 00:15:50.470 8.154 - 8.201: 
99.2084% ( 1) 00:15:50.470 8.486 - 8.533: 99.2170% ( 1) 00:15:50.470 8.770 - 8.818: 99.2255% ( 1) 00:15:50.470 8.865 - 8.913: 99.2340% ( 1) 00:15:50.470 9.055 - 9.102: 99.2425% ( 1) 00:15:50.470 9.102 - 9.150: 99.2510% ( 1) 00:15:50.470 9.813 - 9.861: 99.2595% ( 1) 00:15:50.470 10.714 - 10.761: 99.2680% ( 1) 00:15:50.470 3009.801 - 3021.938: 99.2765% ( 1) 00:15:50.470 3021.938 - 3034.074: 99.2850% ( 1) 00:15:50.470 3398.163 - 3422.436: 99.2936% ( 1) 00:15:50.470 3980.705 - 4004.978: 99.9064% ( 72) 00:15:50.470 4004.978 - 4029.250: 99.9915% ( 10) 00:15:50.470 4975.881 - 5000.154: 100.0000% ( 1) 00:15:50.470 00:15:50.470 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:50.470 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:50.470 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:50.470 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:50.470 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:50.726 [ 00:15:50.726 { 00:15:50.726 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:50.726 "subtype": "Discovery", 00:15:50.726 "listen_addresses": [], 00:15:50.726 "allow_any_host": true, 00:15:50.726 "hosts": [] 00:15:50.726 }, 00:15:50.726 { 00:15:50.726 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:50.726 "subtype": "NVMe", 00:15:50.726 "listen_addresses": [ 00:15:50.726 { 00:15:50.726 "trtype": "VFIOUSER", 00:15:50.726 "adrfam": "IPv4", 00:15:50.726 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:50.726 "trsvcid": "0" 00:15:50.726 } 00:15:50.726 ], 00:15:50.726 "allow_any_host": true, 00:15:50.726 "hosts": [], 00:15:50.726 "serial_number": "SPDK1", 00:15:50.726 "model_number": "SPDK bdev Controller", 00:15:50.726 "max_namespaces": 32, 00:15:50.726 "min_cntlid": 1, 00:15:50.726 "max_cntlid": 65519, 00:15:50.726 "namespaces": [ 00:15:50.726 { 00:15:50.726 "nsid": 1, 00:15:50.726 "bdev_name": "Malloc1", 00:15:50.726 "name": "Malloc1", 00:15:50.726 "nguid": "30378F02A7364B598A326725419926CC", 00:15:50.726 "uuid": "30378f02-a736-4b59-8a32-6725419926cc" 00:15:50.726 }, 00:15:50.726 { 00:15:50.726 "nsid": 2, 00:15:50.726 "bdev_name": "Malloc3", 00:15:50.726 "name": "Malloc3", 00:15:50.726 "nguid": "995596FC78474800A990232D9FC207E2", 00:15:50.727 "uuid": "995596fc-7847-4800-a990-232d9fc207e2" 00:15:50.727 } 00:15:50.727 ] 00:15:50.727 }, 00:15:50.727 { 00:15:50.727 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:50.727 "subtype": "NVMe", 00:15:50.727 "listen_addresses": [ 00:15:50.727 { 00:15:50.727 "trtype": "VFIOUSER", 00:15:50.727 "adrfam": "IPv4", 00:15:50.727 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:50.727 "trsvcid": "0" 00:15:50.727 } 00:15:50.727 ], 00:15:50.727 "allow_any_host": true, 00:15:50.727 "hosts": [], 00:15:50.727 "serial_number": "SPDK2", 00:15:50.727 "model_number": "SPDK bdev Controller", 00:15:50.727 "max_namespaces": 32, 00:15:50.727 "min_cntlid": 1, 00:15:50.727 "max_cntlid": 65519, 00:15:50.727 "namespaces": [ 00:15:50.727 { 00:15:50.727 "nsid": 1, 00:15:50.727 "bdev_name": "Malloc2", 00:15:50.727 "name": "Malloc2", 00:15:50.727 "nguid": 
"A91277010BAA431280B5CF4DE97CC7FA", 00:15:50.727 "uuid": "a9127701-0baa-4312-80b5-cf4de97cc7fa" 00:15:50.727 } 00:15:50.727 ] 00:15:50.727 } 00:15:50.727 ] 00:15:50.984 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:50.984 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2101224 00:15:50.984 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:50.984 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:50.984 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:50.984 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:50.984 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:50.984 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:50.984 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:50.984 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:50.984 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.984 [2024-07-26 11:23:46.566951] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:51.241 Malloc4 00:15:51.241 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:51.523 [2024-07-26 11:23:47.008543] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:51.523 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:51.523 Asynchronous Event Request test 00:15:51.523 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:51.523 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:51.523 Registering asynchronous event callbacks... 00:15:51.523 Starting namespace attribute notice tests for all controllers... 00:15:51.523 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:51.523 aer_cb - Changed Namespace 00:15:51.523 Cleaning up... 
00:15:51.781 [ 00:15:51.781 { 00:15:51.781 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:51.781 "subtype": "Discovery", 00:15:51.781 "listen_addresses": [], 00:15:51.781 "allow_any_host": true, 00:15:51.781 "hosts": [] 00:15:51.781 }, 00:15:51.781 { 00:15:51.781 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:51.781 "subtype": "NVMe", 00:15:51.781 "listen_addresses": [ 00:15:51.781 { 00:15:51.781 "trtype": "VFIOUSER", 00:15:51.781 "adrfam": "IPv4", 00:15:51.781 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:51.781 "trsvcid": "0" 00:15:51.781 } 00:15:51.781 ], 00:15:51.781 "allow_any_host": true, 00:15:51.781 "hosts": [], 00:15:51.781 "serial_number": "SPDK1", 00:15:51.781 "model_number": "SPDK bdev Controller", 00:15:51.781 "max_namespaces": 32, 00:15:51.781 "min_cntlid": 1, 00:15:51.781 "max_cntlid": 65519, 00:15:51.781 "namespaces": [ 00:15:51.781 { 00:15:51.781 "nsid": 1, 00:15:51.781 "bdev_name": "Malloc1", 00:15:51.781 "name": "Malloc1", 00:15:51.781 "nguid": "30378F02A7364B598A326725419926CC", 00:15:51.781 "uuid": "30378f02-a736-4b59-8a32-6725419926cc" 00:15:51.781 }, 00:15:51.781 { 00:15:51.781 "nsid": 2, 00:15:51.781 "bdev_name": "Malloc3", 00:15:51.781 "name": "Malloc3", 00:15:51.781 "nguid": "995596FC78474800A990232D9FC207E2", 00:15:51.781 "uuid": "995596fc-7847-4800-a990-232d9fc207e2" 00:15:51.781 } 00:15:51.781 ] 00:15:51.781 }, 00:15:51.781 { 00:15:51.781 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:51.781 "subtype": "NVMe", 00:15:51.781 "listen_addresses": [ 00:15:51.781 { 00:15:51.781 "trtype": "VFIOUSER", 00:15:51.781 "adrfam": "IPv4", 00:15:51.781 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:51.781 "trsvcid": "0" 00:15:51.781 } 00:15:51.781 ], 00:15:51.781 "allow_any_host": true, 00:15:51.781 "hosts": [], 00:15:51.781 "serial_number": "SPDK2", 00:15:51.781 "model_number": "SPDK bdev Controller", 00:15:51.781 "max_namespaces": 32, 00:15:51.781 "min_cntlid": 1, 00:15:51.781 "max_cntlid": 65519, 00:15:51.781 "namespaces": [ 00:15:51.781 { 00:15:51.781 "nsid": 1, 00:15:51.781 "bdev_name": "Malloc2", 00:15:51.781 "name": "Malloc2", 00:15:51.781 "nguid": "A91277010BAA431280B5CF4DE97CC7FA", 00:15:51.781 "uuid": "a9127701-0baa-4312-80b5-cf4de97cc7fa" 00:15:51.781 }, 00:15:51.781 { 00:15:51.781 "nsid": 2, 00:15:51.781 "bdev_name": "Malloc4", 00:15:51.781 "name": "Malloc4", 00:15:51.781 "nguid": "1D318E4040C0435C915D0FB459D5A81E", 00:15:51.781 "uuid": "1d318e40-40c0-435c-915d-0fb459d5a81e" 00:15:51.781 } 00:15:51.781 ] 00:15:51.781 } 00:15:51.781 ] 00:15:51.781 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2101224 00:15:51.781 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:51.781 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2095497 00:15:51.781 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 2095497 ']' 00:15:51.781 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 2095497 00:15:51.781 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:51.781 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:51.781 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2095497 00:15:51.781 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:51.781 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:51.781 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2095497' 00:15:51.781 killing process with pid 2095497 00:15:51.781 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 2095497 00:15:51.781 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 2095497 00:15:52.386 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:52.386 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:52.386 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:52.386 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:52.386 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:52.386 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2101371 00:15:52.386 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:52.386 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2101371' 00:15:52.386 Process pid: 2101371 00:15:52.386 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:52.386 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2101371 00:15:52.386 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 2101371 ']' 00:15:52.386 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.386 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:52.386 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.386 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:52.386 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:52.386 [2024-07-26 11:23:47.857014] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:52.386 [2024-07-26 11:23:47.858270] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
00:15:52.386 [2024-07-26 11:23:47.858338] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:52.386 EAL: No free 2048 kB hugepages reported on node 1 00:15:52.386 [2024-07-26 11:23:47.930753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:52.648 [2024-07-26 11:23:48.058610] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:52.648 [2024-07-26 11:23:48.058671] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:52.648 [2024-07-26 11:23:48.058688] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:52.648 [2024-07-26 11:23:48.058702] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:52.648 [2024-07-26 11:23:48.058713] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:52.648 [2024-07-26 11:23:48.058775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:52.648 [2024-07-26 11:23:48.058804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:52.648 [2024-07-26 11:23:48.058858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:52.648 [2024-07-26 11:23:48.058862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.648 [2024-07-26 11:23:48.174557] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:52.648 [2024-07-26 11:23:48.174775] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:52.648 [2024-07-26 11:23:48.175111] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:52.648 [2024-07-26 11:23:48.175811] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:52.648 [2024-07-26 11:23:48.176081] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
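With the target relaunched under --interrupt-mode (same -i 0 -e 0xFFFF arguments, four reactors via -m '[0,1,2,3]'), the @64-@74 sequence below rebuilds the two vfio-user devices. Condensed into the loop the script's seq 1 2 implies (commands copied verbatim from the echoes that follow):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  "$RPC" nvmf_create_transport -t VFIOUSER -M -I        # transport_args for the interrupt-mode pass
  mkdir -p /var/run/vfio-user
  for i in 1 2; do
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
    "$RPC" bdev_malloc_create 64 512 -b Malloc$i
    "$RPC" nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    "$RPC" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    "$RPC" nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
        -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done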
00:15:52.648 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:52.648 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:15:52.648 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:53.580 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:54.146 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:54.146 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:54.146 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:54.146 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:54.146 11:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:54.405 Malloc1 00:15:54.663 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:54.922 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:55.488 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:55.745 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:55.745 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:55.745 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:56.311 Malloc2 00:15:56.311 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:56.569 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:56.827 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:57.085 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:57.085 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2101371 00:15:57.085 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@950 -- # '[' -z 2101371 ']' 00:15:57.085 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 2101371 00:15:57.085 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:57.085 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:57.085 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2101371 00:15:57.085 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:57.085 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:57.085 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2101371' 00:15:57.085 killing process with pid 2101371 00:15:57.085 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 2101371 00:15:57.085 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 2101371 00:15:57.652 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:57.652 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:57.652 00:15:57.652 real 0m56.212s 00:15:57.652 user 3m41.315s 00:15:57.652 sys 0m5.324s 00:15:57.652 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:57.652 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:57.652 ************************************ 00:15:57.652 END TEST nvmf_vfio_user 00:15:57.652 ************************************ 00:15:57.652 11:23:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:57.652 11:23:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:57.652 11:23:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:57.652 11:23:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:57.652 ************************************ 00:15:57.652 START TEST nvmf_vfio_user_nvme_compliance 00:15:57.652 ************************************ 00:15:57.652 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:57.652 * Looking for test storage... 
00:15:57.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:57.652 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:57.652 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:57.652 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:57.652 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:57.652 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:57.652 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:57.652 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:57.652 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:57.652 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:57.652 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:57.652 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:57.652 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:57.652 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:57.652 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:57.652 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:57.652 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:57.652 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:57.652 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:57.652 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:57.652 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:57.652 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:57.652 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:57.652 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.652 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.652 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.652 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:57.652 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.652 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:15:57.652 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:57.652 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:57.652 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:57.653 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:57.653 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:57.653 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:57.653 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:57.653 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:57.653 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:57.653 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:57.653 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:57.653 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:57.653 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:57.653 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2102098 00:15:57.653 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:57.653 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2102098' 00:15:57.653 Process pid: 2102098 00:15:57.653 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:57.653 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2102098 00:15:57.653 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 2102098 ']' 00:15:57.653 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.653 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:57.653 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.653 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:57.653 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:57.653 [2024-07-26 11:23:53.252606] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
00:15:57.653 [2024-07-26 11:23:53.252699] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:57.653 EAL: No free 2048 kB hugepages reported on node 1 00:15:57.911 [2024-07-26 11:23:53.324320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:57.911 [2024-07-26 11:23:53.449575] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:57.911 [2024-07-26 11:23:53.449648] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:57.911 [2024-07-26 11:23:53.449665] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:57.911 [2024-07-26 11:23:53.449679] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:57.911 [2024-07-26 11:23:53.449690] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:57.911 [2024-07-26 11:23:53.449755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:57.911 [2024-07-26 11:23:53.449788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:57.911 [2024-07-26 11:23:53.449792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.168 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:58.168 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:15:58.168 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:59.102 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:59.102 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:59.102 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:59.102 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.102 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:59.102 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.102 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:59.102 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:59.102 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.102 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:59.102 malloc0 00:15:59.102 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.102 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 
32 00:15:59.102 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.102 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:59.102 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.102 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:59.102 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.102 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:59.102 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.102 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:59.102 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.102 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:59.102 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.102 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:59.360 EAL: No free 2048 kB hugepages reported on node 1 00:15:59.360 00:15:59.360 00:15:59.360 CUnit - A unit testing framework for C - Version 2.1-3 00:15:59.360 http://cunit.sourceforge.net/ 00:15:59.360 00:15:59.360 00:15:59.360 Suite: nvme_compliance 00:15:59.360 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-26 11:23:54.981073] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:59.360 [2024-07-26 11:23:54.982612] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:59.360 [2024-07-26 11:23:54.982642] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:59.360 [2024-07-26 11:23:54.982656] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:59.360 [2024-07-26 11:23:54.984096] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:59.618 passed 00:15:59.618 Test: admin_identify_ctrlr_verify_fused ...[2024-07-26 11:23:55.077795] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:59.618 [2024-07-26 11:23:55.080817] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:59.618 passed 00:15:59.618 Test: admin_identify_ns ...[2024-07-26 11:23:55.175214] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:59.618 [2024-07-26 11:23:55.234451] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:59.618 [2024-07-26 11:23:55.242451] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:59.618 [2024-07-26 
11:23:55.263596] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:59.875 passed
00:15:59.875 Test: admin_get_features_mandatory_features ...[2024-07-26 11:23:55.355954] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:59.875 [2024-07-26 11:23:55.358984] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:59.875 passed
00:15:59.875 Test: admin_get_features_optional_features ...[2024-07-26 11:23:55.451626] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:59.875 [2024-07-26 11:23:55.454651] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:59.875 passed
00:16:00.133 Test: admin_set_features_number_of_queues ...[2024-07-26 11:23:55.545275] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:16:00.133 [2024-07-26 11:23:55.649548] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:16:00.133 passed
00:16:00.133 Test: admin_get_log_page_mandatory_logs ...[2024-07-26 11:23:55.739541] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:16:00.133 [2024-07-26 11:23:55.742567] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:16:00.133 passed
00:16:00.391 Test: admin_get_log_page_with_lpo ...[2024-07-26 11:23:55.834262] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:16:00.391 [2024-07-26 11:23:55.901447] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512)
00:16:00.391 [2024-07-26 11:23:55.914520] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:16:00.391 passed
00:16:00.391 Test: fabric_property_get ...[2024-07-26 11:23:56.003537] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:16:00.391 [2024-07-26 11:23:56.004867] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed
00:16:00.391 [2024-07-26 11:23:56.006553] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:16:00.392 passed
00:16:00.649 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-26 11:23:56.100187] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:16:00.649 [2024-07-26 11:23:56.101529] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist
00:16:00.649 [2024-07-26 11:23:56.103207] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:16:00.649 passed
00:16:00.649 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-26 11:23:56.192210] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:16:00.649 [2024-07-26 11:23:56.275439] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist
00:16:00.649 [2024-07-26 11:23:56.291456] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist
00:16:00.649 [2024-07-26 11:23:56.296568] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:16:00.907 passed
00:16:00.907 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-26 11:23:56.387955] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:16:00.907 [2024-07-26 11:23:56.389301] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist
00:16:00.907 [2024-07-26 11:23:56.390978] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:16:00.907 passed
00:16:00.907 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-26 11:23:56.480637] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:16:00.907 [2024-07-26 11:23:56.555438] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first
00:16:01.165 [2024-07-26 11:23:56.579443] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist
00:16:01.165 [2024-07-26 11:23:56.584556] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:16:01.165 passed
00:16:01.165 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-26 11:23:56.676974] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:16:01.165 [2024-07-26 11:23:56.678324] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big
00:16:01.165 [2024-07-26 11:23:56.678370] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported
00:16:01.165 [2024-07-26 11:23:56.680002] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:16:01.165 passed
00:16:01.165 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-26 11:23:56.772722] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:16:01.422 [2024-07-26 11:23:56.864440] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1
00:16:01.422 [2024-07-26 11:23:56.872437] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257
00:16:01.422 [2024-07-26 11:23:56.880444] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0
00:16:01.422 [2024-07-26 11:23:56.888441] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128
00:16:01.422 [2024-07-26 11:23:56.917566] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:16:01.422 passed
00:16:01.422 Test: admin_create_io_sq_verify_pc ...[2024-07-26 11:23:57.007606] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:16:01.422 [2024-07-26 11:23:57.026456] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported
00:16:01.422 [2024-07-26 11:23:57.044126] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:16:01.422 passed
00:16:01.679 Test: admin_create_io_qp_max_qps ...[2024-07-26 11:23:57.131752] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:16:02.612 [2024-07-26 11:23:58.236446] nvme_ctrlr.c:5469:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs
00:16:03.177 [2024-07-26 11:23:58.622388] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:16:03.177 passed
00:16:03.177 Test: admin_create_io_sq_shared_cq ...[2024-07-26 11:23:58.713219] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:16:03.434 [2024-07-26 11:23:58.844438] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first
00:16:03.434 [2024-07-26 11:23:58.881549] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:16:03.434 passed
00:16:03.434 
00:16:03.434 Run Summary: Type Total Ran Passed Failed Inactive
00:16:03.434 
suites 1 1 n/a 0 0
00:16:03.434 tests 18 18 18 0 0
00:16:03.434 asserts 360 360 360 0 n/a
00:16:03.434 
00:16:03.434 Elapsed time = 1.633 seconds
00:16:03.434 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2102098
00:16:03.434 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 2102098 ']'
00:16:03.434 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 2102098
00:16:03.434 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname
00:16:03.434 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:16:03.434 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2102098
00:16:03.434 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:16:03.434 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:16:03.434 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2102098'
00:16:03.434 killing process with pid 2102098
00:16:03.434 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 2102098
00:16:03.434 11:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 2102098
00:16:03.692 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user
00:16:03.692 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT
00:16:03.692 
00:16:03.692 real 0m6.201s
00:16:03.692 user 0m17.487s
00:16:03.692 sys 0m0.659s
00:16:03.692 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable
00:16:03.692 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:16:03.692 ************************************
00:16:03.692 END TEST nvmf_vfio_user_nvme_compliance
00:16:03.692 ************************************
00:16:03.692 11:23:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp
00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:16:03.951 ************************************
00:16:03.951 START TEST nvmf_vfio_user_fuzz
00:16:03.951 ************************************
00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp
00:16:03.951 * Looking for test storage...
00:16:03.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2102822 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2102822' 00:16:03.951 Process pid: 2102822 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2102822 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 2102822 ']' 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
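
The fuzz target is now up: nvmf_tgt runs in the background (per the usual SPDK app options, -i 0 picks the shared-memory id, -e 0xFFFF sets the tracepoint group mask, -m 0x1 pins the reactor to core 0) and the script blocks until the RPC socket answers. A minimal sketch of that launch-and-wait pattern, with a plain rpc.py polling loop standing in for the autotest's waitforlisten helper:

nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
# poll the default RPC socket until the target is ready to serve requests
until rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.5
done
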
00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:03.951 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:04.209 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:04.209 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:16:04.209 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:05.579 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:05.579 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.579 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:05.579 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.579 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:05.579 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:05.579 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.579 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:05.579 malloc0 00:16:05.579 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.579 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:05.579 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.579 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:05.579 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.579 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:05.579 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.579 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:05.579 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.579 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:05.579 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.579 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:05.579 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.579 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
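
Stripped of the rpc_cmd xtrace, the target-side setup just logged is five RPCs: create the VFIOUSER transport, back it with a 64 MiB malloc bdev of 512-byte blocks, create subsystem nqn.2021-09.io.spdk:cnode0 (-a allows any host, -s sets the serial), attach the namespace, and expose a listener in /var/run/vfio-user. The same sequence as direct rpc.py calls, a sketch of what the wrapper runs:

rpc.py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
rpc.py bdev_malloc_create 64 512 -b malloc0
rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
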
00:16:05.579 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a
00:16:37.656 Fuzzing completed. Shutting down the fuzz application
00:16:37.656 
00:16:37.656 Dumping successful admin opcodes:
00:16:37.656 8, 9, 10, 24,
00:16:37.656 Dumping successful io opcodes:
00:16:37.656 0,
00:16:37.656 NS: 0x200003a1ef00 I/O qp, Total commands completed: 588132, total successful commands: 2267, random_seed: 561058624
00:16:37.656 NS: 0x200003a1ef00 admin qp, Total commands completed: 75118, total successful commands: 587, random_seed: 4224583424
00:16:37.656 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0
00:16:37.656 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:37.656 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:16:37.656 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:37.656 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2102822
00:16:37.656 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 2102822 ']'
00:16:37.656 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 2102822
00:16:37.656 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname
00:16:37.656 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:16:37.656 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2102822
00:16:37.656 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:16:37.656 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:16:37.656 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2102822'
00:16:37.656 killing process with pid 2102822
00:16:37.656 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 2102822
00:16:37.656 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 2102822
00:16:37.656 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt
00:16:37.656 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT
00:16:37.656 
00:16:37.656 real 0m32.499s
00:16:37.656 user 0m32.737s
00:16:37.656 sys 0m26.968s
00:16:37.656 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable
00:16:37.656 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:16:37.656 
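
The stats above are the expected shape for a fuzz pass: with opcodes drawn at random, only 2267 of 588132 I/O commands and 587 of 75118 admin commands succeed, and the point of the test is that the vfio-user target survives the malformed remainder. The run is replayable because the seed is pinned: -t 30 is the runtime in seconds and -S 123456 seeds the generator (the per-queue random_seed values in the dump appear to derive from it), so re-running the same invocation,

nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a

against a freshly configured target should reproduce the same command stream.
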
************************************ 00:16:37.656 END TEST nvmf_vfio_user_fuzz 00:16:37.656 ************************************ 00:16:37.656 11:24:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:37.656 11:24:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:37.656 11:24:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:37.656 11:24:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:37.656 ************************************ 00:16:37.656 START TEST nvmf_auth_target 00:16:37.656 ************************************ 00:16:37.656 11:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:37.656 * Looking for test storage... 00:16:37.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:37.656 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:37.656 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:37.657 11:24:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 
00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:37.657 11:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.031 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:39.031 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:39.031 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:39.031 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:16:39.031 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:39.031 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:39.031 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:39.031 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:39.031 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:39.031 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:16:39.031 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:39.031 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:16:39.031 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:39.031 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:16:39.031 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:39.031 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:39.031 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:39.031 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:39.031 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:39.031 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:39.032 11:24:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:16:39.032 Found 0000:84:00.0 (0x8086 - 0x159b) 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:16:39.032 Found 0000:84:00.1 (0x8086 - 0x159b) 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:16:39.032 Found net devices under 0000:84:00.0: cvl_0_0 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:39.032 11:24:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:16:39.032 Found net devices under 0000:84:00.1: cvl_0_1 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:39.032 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:39.291 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:39.291 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:39.291 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:39.291 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:39.291 11:24:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:16:39.291 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:16:39.291 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:16:39.291 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:39.291 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms
00:16:39.291 
00:16:39.291 --- 10.0.0.2 ping statistics ---
00:16:39.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:39.291 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms
00:16:39.291 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:16:39.291 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:39.291 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms
00:16:39.291 
00:16:39.291 --- 10.0.0.1 ping statistics ---
00:16:39.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:39.291 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms
00:16:39.291 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:39.291 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0
00:16:39.291 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:16:39.291 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:39.291 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:16:39.291 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:16:39.291 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:39.291 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:16:39.291 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:16:39.291 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth
00:16:39.291 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:16:39.291 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable
00:16:39.291 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:39.291 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2108906
00:16:39.291 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth
00:16:39.291 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2108906
00:16:39.291 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2108906 ']'
00:16:39.291 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:39.291 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100
00:16:39.291 11:24:34
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.291 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:39.291 11:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2108972 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f0446457f4b20287d5e043d3fb11ed9f6b72401dadc175b4 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ONJ 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f0446457f4b20287d5e043d3fb11ed9f6b72401dadc175b4 0 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f0446457f4b20287d5e043d3fb11ed9f6b72401dadc175b4 0 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f0446457f4b20287d5e043d3fb11ed9f6b72401dadc175b4 00:16:39.857 11:24:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ONJ 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ONJ 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.ONJ 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=dc50a10322d10a1df6acb4d106e979242b97df21c57e56165d7b0ab07892cd3b 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Le3 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key dc50a10322d10a1df6acb4d106e979242b97df21c57e56165d7b0ab07892cd3b 3 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 dc50a10322d10a1df6acb4d106e979242b97df21c57e56165d7b0ab07892cd3b 3 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=dc50a10322d10a1df6acb4d106e979242b97df21c57e56165d7b0ab07892cd3b 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Le3 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Le3 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.Le3 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:39.857 11:24:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0ba2d20002496ccdf90f200938689832 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.9S6 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0ba2d20002496ccdf90f200938689832 1 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0ba2d20002496ccdf90f200938689832 1 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0ba2d20002496ccdf90f200938689832 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:39.857 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:40.115 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.9S6 00:16:40.115 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.9S6 00:16:40.115 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.9S6 00:16:40.115 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:16:40.115 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:40.115 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:40.115 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:40.115 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:40.115 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=93e9595b6969c304e4d76e9af6948aa3baf187c3c9e48786 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.zFs 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 93e9595b6969c304e4d76e9af6948aa3baf187c3c9e48786 2 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
93e9595b6969c304e4d76e9af6948aa3baf187c3c9e48786 2 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=93e9595b6969c304e4d76e9af6948aa3baf187c3c9e48786 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.zFs 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.zFs 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.zFs 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c4d3bb00e630ea37e4ef5f57c9a2a413eec309fe737c74e7 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ojt 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c4d3bb00e630ea37e4ef5f57c9a2a413eec309fe737c74e7 2 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c4d3bb00e630ea37e4ef5f57c9a2a413eec309fe737c74e7 2 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c4d3bb00e630ea37e4ef5f57c9a2a413eec309fe737c74e7 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ojt 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ojt 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.ojt 00:16:40.116 11:24:35 
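Each gen_dhchap_key <digest> <len> call traced above follows one recipe: map the digest name to its DH-HMAC-CHAP hash identifier via the digests table, read len/2 random bytes from /dev/urandom as a len-character hex string, encode that into a DHHC-1 secret, and store it in a mode-0600 temp file whose path is handed back to the caller. A minimal sketch of the helper, reconstructed from the @723-@732 trace markers (the real function lives in nvmf/common.sh and may differ in detail):

    # Sketch reconstructed from the trace above, not the verbatim
    # nvmf/common.sh implementation.
    gen_dhchap_key() {
        local digest=$1 len=$2 file key
        local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)

        # len counts hex characters, so read len/2 bytes of entropy
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
        file=$(mktemp -t "spdk.key-$digest.XXX")
        format_dhchap_key "$key" "${digests[$digest]}" > "$file"
        chmod 0600 "$file"  # DHCHAP secrets must not be group/world readable
        echo "$file"        # caller stores the path in keys[] or ckeys[]
    }

The script fills keys[0..3] plus controller keys ckeys[0..2]; ckeys[3] is left empty on purpose, so the key3 rounds later in this log exercise unidirectional (host-only) authentication.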
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9d96258a97eb4d454b071b6100e5679a 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.t1p 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9d96258a97eb4d454b071b6100e5679a 1 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9d96258a97eb4d454b071b6100e5679a 1 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9d96258a97eb4d454b071b6100e5679a 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.t1p 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.t1p 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.t1p 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f2e7dbc29e5e7c0d7d2db7ee02a88342fc9741943f59a9737c14856391703438 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:40.116 
11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.HxW 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f2e7dbc29e5e7c0d7d2db7ee02a88342fc9741943f59a9737c14856391703438 3 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f2e7dbc29e5e7c0d7d2db7ee02a88342fc9741943f59a9737c14856391703438 3 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f2e7dbc29e5e7c0d7d2db7ee02a88342fc9741943f59a9737c14856391703438 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.HxW 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.HxW 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.HxW 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2108906 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2108906 ']' 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:40.116 11:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.681 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:40.682 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:40.682 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2108972 /var/tmp/host.sock 00:16:40.682 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2108972 ']' 00:16:40.682 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:16:40.682 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:40.682 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
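The bare `python -` steps in the trace are format_key, which turns the raw hex string into the printable secret that shows up later in this log as DHHC-1:<id>:<base64>:. A sketch of that step, assuming the four-byte trailer is the little-endian CRC32 of the key as the DH-HMAC-CHAP secret representation prescribes (base64-decoding any secret in this log yields the hex string printed by xxd plus four extra bytes, which supports the assumption):

    format_dhchap_key() {
        local key=$1 digest=$2
        # The CRC32 trailer below is an assumption, not read off the trace.
        python - << EOF
    import base64, zlib
    key = b"$key"
    crc = zlib.crc32(key).to_bytes(4, byteorder="little")
    print("DHHC-1:{:02x}:{}:".format($digest, base64.b64encode(key + crc).decode()), end="")
    EOF
    }

The two-digit id encodes which hash the secret targets (00 null, 01 SHA-256, 02 SHA-384, 03 SHA-512), matching the digests table above.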
00:16:40.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:40.682 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:40.682 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.247 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:41.247 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:41.247 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:16:41.247 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.247 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.247 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.247 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:41.247 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ONJ 00:16:41.247 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.247 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.247 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.247 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.ONJ 00:16:41.247 11:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.ONJ 00:16:41.504 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.Le3 ]] 00:16:41.504 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Le3 00:16:41.504 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.504 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.504 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.504 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Le3 00:16:41.504 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Le3 00:16:42.069 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:42.069 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.9S6 00:16:42.069 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.069 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.069 11:24:37 
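From here on two SPDK processes are in play: the nvmf target answering RPCs on the default /var/tmp/spdk.sock (pid 2108906) and a host-side application on /var/tmp/host.sock (pid 2108972). The hostrpc helper seen at every @31 marker is simply rpc.py pointed at the host socket; waitforlisten blocks until a pid's RPC server answers. A sketch of both, assuming the wait loop polls the rpc_get_methods RPC (the real helper in autotest_common.sh has more retries and logging):

    # hostrpc as visible at the @31 markers; $rootdir is the spdk checkout,
    # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk in this run
    hostrpc() { "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"; }

    # Sketch only: poll until the RPC socket answers or the process dies.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while kill -0 "$pid" 2> /dev/null; do
            "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
            sleep 0.1
        done
        return 1  # process exited before its RPC server came up
    }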
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.069 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.9S6 00:16:42.070 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.9S6 00:16:42.327 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.zFs ]] 00:16:42.327 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zFs 00:16:42.327 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.327 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.584 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.584 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zFs 00:16:42.584 11:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zFs 00:16:42.841 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:42.841 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ojt 00:16:42.841 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.841 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.841 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.841 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.ojt 00:16:42.841 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.ojt 00:16:43.406 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.t1p ]] 00:16:43.406 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.t1p 00:16:43.406 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.406 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.406 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.406 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.t1p 00:16:43.406 11:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.t1p 00:16:43.664 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
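The @81-@86 markers repeat one registration loop: every generated file is added under a well-known name on both keyrings, so later RPCs can reference secrets as key0..key3 and ckey0..ckey2 rather than by path. As traced:

    # Mirrors target/auth.sh @81-@86; the [[ -n ... ]] guard (@84) skips
    # slots that have no controller key.
    for i in "${!keys[@]}"; do
        rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"    # target keyring
        hostrpc keyring_file_add_key "key$i" "${keys[i]}"    # host keyring
        if [[ -n ${ckeys[i]} ]]; then
            rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
            hostrpc keyring_file_add_key "ckey$i" "${ckeys[i]}"
        fi
    done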
target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:43.664 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.HxW 00:16:43.664 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.664 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.664 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.664 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.HxW 00:16:43.664 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.HxW 00:16:43.921 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:16:43.921 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:43.921 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:43.921 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:43.921 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:43.921 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:44.179 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:16:44.179 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:44.179 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:44.179 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:44.179 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:44.179 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.179 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.179 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.179 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.179 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.179 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.179 11:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
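Before each round, @94 narrows the host to a single negotiable combination (here sha256 plus the null DH group), so the DH-HMAC-CHAP transaction can only settle on the pair under test. Inside connect_authenticate, the @37 array expansion makes the controller key optional; keyid, subnqn and hostnqn below stand in for the positional parameters and the literal NQNs in the trace:

    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

    # @37: expands to nothing when ckeys[keyid] is empty (key3), otherwise
    # to the two extra arguments enabling bidirectional authentication.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" "${ckey[@]}"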
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.743 00:16:44.743 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:44.743 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.743 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:45.001 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.001 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.001 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.001 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.001 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.001 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:45.001 { 00:16:45.001 "cntlid": 1, 00:16:45.001 "qid": 0, 00:16:45.001 "state": "enabled", 00:16:45.001 "thread": "nvmf_tgt_poll_group_000", 00:16:45.001 "listen_address": { 00:16:45.001 "trtype": "TCP", 00:16:45.001 "adrfam": "IPv4", 00:16:45.001 "traddr": "10.0.0.2", 00:16:45.001 "trsvcid": "4420" 00:16:45.001 }, 00:16:45.001 "peer_address": { 00:16:45.001 "trtype": "TCP", 00:16:45.001 "adrfam": "IPv4", 00:16:45.001 "traddr": "10.0.0.1", 00:16:45.001 "trsvcid": "53994" 00:16:45.001 }, 00:16:45.001 "auth": { 00:16:45.001 "state": "completed", 00:16:45.001 "digest": "sha256", 00:16:45.001 "dhgroup": "null" 00:16:45.001 } 00:16:45.001 } 00:16:45.001 ]' 00:16:45.001 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:45.001 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:45.001 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:45.001 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:45.001 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:45.001 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.001 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.001 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.566 11:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret 
DHHC-1:00:ZjA0NDY0NTdmNGIyMDI4N2Q1ZTA0M2QzZmIxMWVkOWY2YjcyNDAxZGFkYzE3NWI04BxqDw==: --dhchap-ctrl-secret DHHC-1:03:ZGM1MGExMDMyMmQxMGExZGY2YWNiNGQxMDZlOTc5MjQyYjk3ZGYyMWM1N2U1NjE2NWQ3YjBhYjA3ODkyY2QzYrI9A0o=: 00:16:46.498 11:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.498 11:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:46.498 11:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.498 11:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.498 11:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.498 11:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:46.498 11:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:46.498 11:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:47.094 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:16:47.094 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:47.094 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:47.094 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:47.094 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:47.094 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.094 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.094 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.094 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.094 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.094 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.094 11:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:16:47.660 00:16:47.660 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.660 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.660 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.916 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.916 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.916 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.916 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.916 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.916 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:47.916 { 00:16:47.916 "cntlid": 3, 00:16:47.916 "qid": 0, 00:16:47.916 "state": "enabled", 00:16:47.916 "thread": "nvmf_tgt_poll_group_000", 00:16:47.916 "listen_address": { 00:16:47.916 "trtype": "TCP", 00:16:47.916 "adrfam": "IPv4", 00:16:47.916 "traddr": "10.0.0.2", 00:16:47.916 "trsvcid": "4420" 00:16:47.916 }, 00:16:47.916 "peer_address": { 00:16:47.916 "trtype": "TCP", 00:16:47.916 "adrfam": "IPv4", 00:16:47.916 "traddr": "10.0.0.1", 00:16:47.916 "trsvcid": "54020" 00:16:47.916 }, 00:16:47.916 "auth": { 00:16:47.916 "state": "completed", 00:16:47.916 "digest": "sha256", 00:16:47.916 "dhgroup": "null" 00:16:47.916 } 00:16:47.916 } 00:16:47.916 ]' 00:16:47.916 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:47.916 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:47.916 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:47.916 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:47.916 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:47.916 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.916 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.916 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.480 11:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MGJhMmQyMDAwMjQ5NmNjZGY5MGYyMDA5Mzg2ODk4MzLMrl2/: --dhchap-ctrl-secret DHHC-1:02:OTNlOTU5NWI2OTY5YzMwNGU0ZDc2ZTlhZjY5NDhhYTNiYWYxODdjM2M5ZTQ4Nzg2Cu2x/g==: 00:16:49.411 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.411 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:16:49.411 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:49.411 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.411 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.411 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.411 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:49.411 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:49.411 11:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:49.669 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:16:49.669 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:49.669 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:49.669 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:49.669 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:49.669 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.669 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.669 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.669 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.669 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.669 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.669 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.233 00:16:50.233 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:50.233 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:50.233 11:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
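Each round does not merely connect; it proves authentication ran. nvmf_subsystem_get_qpairs is queried on the target and the qpair's auth block must carry the expected digest and dhgroup with state "completed". The backslash-riddled comparisons in the trace (e.g. [[ sha256 == \s\h\a\2\5\6 ]]) are only xtrace's way of printing a literal, non-pattern right-hand side of ==; the checks amount to:

    # Equivalent of the @44-@48 assertions, shown for a sha256/null round.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "sha256" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "null" ]]
    [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == "completed" ]]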
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.798 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.798 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.798 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.798 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.798 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.798 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:50.798 { 00:16:50.798 "cntlid": 5, 00:16:50.798 "qid": 0, 00:16:50.798 "state": "enabled", 00:16:50.798 "thread": "nvmf_tgt_poll_group_000", 00:16:50.798 "listen_address": { 00:16:50.798 "trtype": "TCP", 00:16:50.798 "adrfam": "IPv4", 00:16:50.798 "traddr": "10.0.0.2", 00:16:50.798 "trsvcid": "4420" 00:16:50.798 }, 00:16:50.798 "peer_address": { 00:16:50.798 "trtype": "TCP", 00:16:50.798 "adrfam": "IPv4", 00:16:50.798 "traddr": "10.0.0.1", 00:16:50.798 "trsvcid": "54042" 00:16:50.798 }, 00:16:50.798 "auth": { 00:16:50.798 "state": "completed", 00:16:50.798 "digest": "sha256", 00:16:50.798 "dhgroup": "null" 00:16:50.798 } 00:16:50.798 } 00:16:50.798 ]' 00:16:50.798 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:50.798 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:50.798 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:50.798 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:50.798 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:50.798 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.798 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.798 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.056 11:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YzRkM2JiMDBlNjMwZWEzN2U0ZWY1ZjU3YzlhMmE0MTNlZWMzMDlmZTczN2M3NGU3tqVIdQ==: --dhchap-ctrl-secret DHHC-1:01:OWQ5NjI1OGE5N2ViNGQ0NTRiMDcxYjYxMDBlNTY3OWGFzkPl: 00:16:52.426 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.426 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:52.426 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:52.426 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.426 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.426 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:52.426 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:52.426 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:52.426 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:16:52.426 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:52.426 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:52.426 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:52.426 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:52.426 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.426 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:52.426 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.426 11:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.426 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.426 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:52.426 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:52.990 00:16:52.990 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:52.990 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:52.990 11:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.564 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.564 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.564 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.564 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.564 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.564 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:53.564 { 00:16:53.564 "cntlid": 7, 00:16:53.564 "qid": 0, 00:16:53.564 "state": "enabled", 00:16:53.564 "thread": "nvmf_tgt_poll_group_000", 00:16:53.564 "listen_address": { 00:16:53.564 "trtype": "TCP", 00:16:53.564 "adrfam": "IPv4", 00:16:53.564 "traddr": "10.0.0.2", 00:16:53.564 "trsvcid": "4420" 00:16:53.564 }, 00:16:53.564 "peer_address": { 00:16:53.564 "trtype": "TCP", 00:16:53.564 "adrfam": "IPv4", 00:16:53.564 "traddr": "10.0.0.1", 00:16:53.564 "trsvcid": "54076" 00:16:53.564 }, 00:16:53.564 "auth": { 00:16:53.564 "state": "completed", 00:16:53.564 "digest": "sha256", 00:16:53.564 "dhgroup": "null" 00:16:53.564 } 00:16:53.564 } 00:16:53.564 ]' 00:16:53.564 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:53.564 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:53.564 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:53.564 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:53.564 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:53.564 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.564 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.564 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.127 11:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZjJlN2RiYzI5ZTVlN2MwZDdkMmRiN2VlMDJhODgzNDJmYzk3NDE5NDNmNTlhOTczN2MxNDg1NjM5MTcwMzQzODgS8Js=: 00:16:55.055 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.055 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:55.055 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.055 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.055 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.055 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:55.055 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:55.055 11:24:50 
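With several iterations now traced, the shape of one full round is clear. A condensed sketch assembled from the @34-@56 markers (jq assertions elided, ckey built as shown earlier; subnqn, hostnqn and hostid abbreviate the literal values used throughout this log):

    connect_authenticate() {  # sketch, not the verbatim target/auth.sh body
        local digest=$1 dhgroup=$2 keyid=$3
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})

        # authenticate with the SPDK initiator, verify the qpair, tear down
        rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
            --dhchap-key "key$keyid" "${ckey[@]}"
        hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
            --dhchap-key "key$keyid" "${ckey[@]}"
        hostrpc bdev_nvme_detach_controller nvme0

        # repeat the handshake with the kernel initiator (@52-@56)
        local csec=()
        [[ -n ${ckeys[keyid]} ]] && csec=(--dhchap-ctrl-secret "$(< "${ckeys[keyid]}")")
        nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
            --hostid "$hostid" --dhchap-secret "$(< "${keys[keyid]}")" "${csec[@]}"
        nvme disconnect -n "$subnqn"
        rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
    }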
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:55.055 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:55.311 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:16:55.311 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:55.311 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:55.311 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:55.311 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:55.311 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.311 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.311 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.311 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.568 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.568 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.568 11:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.131 00:16:56.131 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:56.131 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:56.131 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.388 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.388 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.388 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.388 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.388 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.388 11:24:51 
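The @91-@93 markers are the sweep that drives this whole section: every digest against every DH group with all four key slots. This excerpt shows sha256 paired first with null and now with ffdhe2048; the remaining group and digest values are assumed to follow the same pattern:

    # Sweep structure per target/auth.sh @91-@94 (array contents beyond
    # sha256, null and ffdhe2048 are not visible in this excerpt).
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                hostrpc bdev_nvme_set_options \
                    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done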
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:56.388 { 00:16:56.388 "cntlid": 9, 00:16:56.388 "qid": 0, 00:16:56.388 "state": "enabled", 00:16:56.388 "thread": "nvmf_tgt_poll_group_000", 00:16:56.388 "listen_address": { 00:16:56.388 "trtype": "TCP", 00:16:56.388 "adrfam": "IPv4", 00:16:56.388 "traddr": "10.0.0.2", 00:16:56.388 "trsvcid": "4420" 00:16:56.388 }, 00:16:56.388 "peer_address": { 00:16:56.388 "trtype": "TCP", 00:16:56.388 "adrfam": "IPv4", 00:16:56.388 "traddr": "10.0.0.1", 00:16:56.388 "trsvcid": "47616" 00:16:56.388 }, 00:16:56.388 "auth": { 00:16:56.388 "state": "completed", 00:16:56.388 "digest": "sha256", 00:16:56.388 "dhgroup": "ffdhe2048" 00:16:56.388 } 00:16:56.388 } 00:16:56.388 ]' 00:16:56.388 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:56.388 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:56.388 11:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:56.388 11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:56.388 11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:56.645 11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.645 11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.645 11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.902 11:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZjA0NDY0NTdmNGIyMDI4N2Q1ZTA0M2QzZmIxMWVkOWY2YjcyNDAxZGFkYzE3NWI04BxqDw==: --dhchap-ctrl-secret DHHC-1:03:ZGM1MGExMDMyMmQxMGExZGY2YWNiNGQxMDZlOTc5MjQyYjk3ZGYyMWM1N2U1NjE2NWQ3YjBhYjA3ODkyY2QzYrI9A0o=: 00:16:58.272 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.272 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:58.272 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.272 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.272 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.272 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:58.272 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:58.272 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:58.529 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:16:58.529 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:58.529 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:58.529 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:58.529 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:58.529 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.529 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.529 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.529 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.529 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.529 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.529 11:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.093 00:16:59.093 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:59.093 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:59.093 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.350 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.350 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.350 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.350 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.351 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.351 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:59.351 { 00:16:59.351 "cntlid": 11, 00:16:59.351 "qid": 0, 00:16:59.351 "state": "enabled", 00:16:59.351 "thread": "nvmf_tgt_poll_group_000", 00:16:59.351 "listen_address": { 
00:16:59.351 "trtype": "TCP", 00:16:59.351 "adrfam": "IPv4", 00:16:59.351 "traddr": "10.0.0.2", 00:16:59.351 "trsvcid": "4420" 00:16:59.351 }, 00:16:59.351 "peer_address": { 00:16:59.351 "trtype": "TCP", 00:16:59.351 "adrfam": "IPv4", 00:16:59.351 "traddr": "10.0.0.1", 00:16:59.351 "trsvcid": "47638" 00:16:59.351 }, 00:16:59.351 "auth": { 00:16:59.351 "state": "completed", 00:16:59.351 "digest": "sha256", 00:16:59.351 "dhgroup": "ffdhe2048" 00:16:59.351 } 00:16:59.351 } 00:16:59.351 ]' 00:16:59.351 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:59.351 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:59.351 11:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:59.608 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:59.608 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:59.608 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.608 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.608 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.866 11:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MGJhMmQyMDAwMjQ5NmNjZGY5MGYyMDA5Mzg2ODk4MzLMrl2/: --dhchap-ctrl-secret DHHC-1:02:OTNlOTU5NWI2OTY5YzMwNGU0ZDc2ZTlhZjY5NDhhYTNiYWYxODdjM2M5ZTQ4Nzg2Cu2x/g==: 00:17:01.238 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.238 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:01.238 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.238 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.238 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.238 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:01.238 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:01.238 11:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:01.510 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:17:01.510 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:01.510 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:01.510 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:01.510 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:01.510 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.510 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.511 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.511 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.800 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.800 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.800 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.365 00:17:02.365 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:02.365 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:02.365 11:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.622 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.622 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.622 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.622 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.622 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.623 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:02.623 { 00:17:02.623 "cntlid": 13, 00:17:02.623 "qid": 0, 00:17:02.623 "state": "enabled", 00:17:02.623 "thread": "nvmf_tgt_poll_group_000", 00:17:02.623 "listen_address": { 00:17:02.623 "trtype": "TCP", 00:17:02.623 "adrfam": "IPv4", 00:17:02.623 "traddr": "10.0.0.2", 00:17:02.623 "trsvcid": "4420" 00:17:02.623 }, 00:17:02.623 "peer_address": { 00:17:02.623 "trtype": "TCP", 00:17:02.623 "adrfam": "IPv4", 00:17:02.623 "traddr": "10.0.0.1", 00:17:02.623 "trsvcid": "47674" 00:17:02.623 }, 00:17:02.623 "auth": { 00:17:02.623 
"state": "completed", 00:17:02.623 "digest": "sha256", 00:17:02.623 "dhgroup": "ffdhe2048" 00:17:02.623 } 00:17:02.623 } 00:17:02.623 ]' 00:17:02.623 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:02.623 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:02.623 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:02.623 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:02.623 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:02.623 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.623 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.623 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.187 11:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YzRkM2JiMDBlNjMwZWEzN2U0ZWY1ZjU3YzlhMmE0MTNlZWMzMDlmZTczN2M3NGU3tqVIdQ==: --dhchap-ctrl-secret DHHC-1:01:OWQ5NjI1OGE5N2ViNGQ0NTRiMDcxYjYxMDBlNTY3OWGFzkPl: 00:17:04.559 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.559 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:04.559 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.559 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.559 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.559 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:04.559 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:04.559 11:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:04.816 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:17:04.816 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:04.816 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:04.816 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:04.816 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:17:04.817 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.817 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:04.817 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.817 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.074 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.074 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:05.074 11:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:05.639 00:17:05.639 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:05.639 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:05.639 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.897 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.897 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.897 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.897 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.897 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.897 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:05.897 { 00:17:05.897 "cntlid": 15, 00:17:05.897 "qid": 0, 00:17:05.897 "state": "enabled", 00:17:05.897 "thread": "nvmf_tgt_poll_group_000", 00:17:05.897 "listen_address": { 00:17:05.897 "trtype": "TCP", 00:17:05.897 "adrfam": "IPv4", 00:17:05.897 "traddr": "10.0.0.2", 00:17:05.897 "trsvcid": "4420" 00:17:05.897 }, 00:17:05.897 "peer_address": { 00:17:05.897 "trtype": "TCP", 00:17:05.897 "adrfam": "IPv4", 00:17:05.897 "traddr": "10.0.0.1", 00:17:05.897 "trsvcid": "51390" 00:17:05.897 }, 00:17:05.897 "auth": { 00:17:05.897 "state": "completed", 00:17:05.897 "digest": "sha256", 00:17:05.897 "dhgroup": "ffdhe2048" 00:17:05.897 } 00:17:05.897 } 00:17:05.897 ]' 00:17:05.897 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:05.897 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:05.897 11:25:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:05.897 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:05.897 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:05.897 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.897 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.897 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.154 11:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZjJlN2RiYzI5ZTVlN2MwZDdkMmRiN2VlMDJhODgzNDJmYzk3NDE5NDNmNTlhOTczN2MxNDg1NjM5MTcwMzQzODgS8Js=: 00:17:07.528 11:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.528 11:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:07.528 11:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.528 11:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.528 11:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.528 11:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:07.528 11:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:07.528 11:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:07.528 11:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:07.785 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:17:07.785 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:07.785 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:07.785 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:07.785 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:07.785 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.785 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.785 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.785 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.785 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.785 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.785 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.348 00:17:08.348 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:08.348 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:08.348 11:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.912 11:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.912 11:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.912 11:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.912 11:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.912 11:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.912 11:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:08.912 { 00:17:08.912 "cntlid": 17, 00:17:08.912 "qid": 0, 00:17:08.912 "state": "enabled", 00:17:08.912 "thread": "nvmf_tgt_poll_group_000", 00:17:08.912 "listen_address": { 00:17:08.912 "trtype": "TCP", 00:17:08.912 "adrfam": "IPv4", 00:17:08.912 "traddr": "10.0.0.2", 00:17:08.912 "trsvcid": "4420" 00:17:08.912 }, 00:17:08.912 "peer_address": { 00:17:08.912 "trtype": "TCP", 00:17:08.912 "adrfam": "IPv4", 00:17:08.912 "traddr": "10.0.0.1", 00:17:08.912 "trsvcid": "51422" 00:17:08.912 }, 00:17:08.912 "auth": { 00:17:08.912 "state": "completed", 00:17:08.912 "digest": "sha256", 00:17:08.912 "dhgroup": "ffdhe3072" 00:17:08.912 } 00:17:08.912 } 00:17:08.912 ]' 00:17:08.912 11:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:08.912 11:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:08.912 11:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:08.912 11:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:08.912 11:25:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:08.912 11:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.912 11:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.912 11:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.477 11:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZjA0NDY0NTdmNGIyMDI4N2Q1ZTA0M2QzZmIxMWVkOWY2YjcyNDAxZGFkYzE3NWI04BxqDw==: --dhchap-ctrl-secret DHHC-1:03:ZGM1MGExMDMyMmQxMGExZGY2YWNiNGQxMDZlOTc5MjQyYjk3ZGYyMWM1N2U1NjE2NWQ3YjBhYjA3ODkyY2QzYrI9A0o=: 00:17:10.847 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.847 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:10.847 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.847 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.847 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.847 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:10.847 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:10.847 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:11.104 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:17:11.104 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:11.104 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:11.104 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:11.104 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:11.104 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.104 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.104 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.104 11:25:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.104 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.104 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.104 11:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.361 00:17:11.361 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:11.361 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:11.361 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.924 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.924 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.924 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.924 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.924 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.924 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:11.924 { 00:17:11.924 "cntlid": 19, 00:17:11.924 "qid": 0, 00:17:11.924 "state": "enabled", 00:17:11.924 "thread": "nvmf_tgt_poll_group_000", 00:17:11.924 "listen_address": { 00:17:11.924 "trtype": "TCP", 00:17:11.924 "adrfam": "IPv4", 00:17:11.924 "traddr": "10.0.0.2", 00:17:11.924 "trsvcid": "4420" 00:17:11.924 }, 00:17:11.924 "peer_address": { 00:17:11.924 "trtype": "TCP", 00:17:11.924 "adrfam": "IPv4", 00:17:11.924 "traddr": "10.0.0.1", 00:17:11.924 "trsvcid": "51452" 00:17:11.924 }, 00:17:11.924 "auth": { 00:17:11.924 "state": "completed", 00:17:11.924 "digest": "sha256", 00:17:11.924 "dhgroup": "ffdhe3072" 00:17:11.924 } 00:17:11.924 } 00:17:11.924 ]' 00:17:12.231 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:12.231 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:12.231 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:12.231 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:12.231 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:12.231 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.231 11:25:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.231 11:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.795 11:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MGJhMmQyMDAwMjQ5NmNjZGY5MGYyMDA5Mzg2ODk4MzLMrl2/: --dhchap-ctrl-secret DHHC-1:02:OTNlOTU5NWI2OTY5YzMwNGU0ZDc2ZTlhZjY5NDhhYTNiYWYxODdjM2M5ZTQ4Nzg2Cu2x/g==: 00:17:13.727 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.984 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:13.984 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.984 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.984 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.984 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:13.984 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:13.984 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:14.549 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:17:14.549 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:14.549 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:14.549 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:14.549 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:14.549 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.549 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.549 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.549 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.549 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.549 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.549 11:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.806 00:17:14.807 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:14.807 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:14.807 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.064 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.064 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.064 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.064 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.064 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.064 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:15.064 { 00:17:15.064 "cntlid": 21, 00:17:15.064 "qid": 0, 00:17:15.064 "state": "enabled", 00:17:15.064 "thread": "nvmf_tgt_poll_group_000", 00:17:15.064 "listen_address": { 00:17:15.064 "trtype": "TCP", 00:17:15.064 "adrfam": "IPv4", 00:17:15.064 "traddr": "10.0.0.2", 00:17:15.064 "trsvcid": "4420" 00:17:15.064 }, 00:17:15.064 "peer_address": { 00:17:15.064 "trtype": "TCP", 00:17:15.064 "adrfam": "IPv4", 00:17:15.064 "traddr": "10.0.0.1", 00:17:15.064 "trsvcid": "39688" 00:17:15.064 }, 00:17:15.064 "auth": { 00:17:15.064 "state": "completed", 00:17:15.064 "digest": "sha256", 00:17:15.064 "dhgroup": "ffdhe3072" 00:17:15.064 } 00:17:15.064 } 00:17:15.064 ]' 00:17:15.064 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:15.321 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:15.321 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:15.321 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:15.321 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:15.321 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.321 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.321 11:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.579 
11:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YzRkM2JiMDBlNjMwZWEzN2U0ZWY1ZjU3YzlhMmE0MTNlZWMzMDlmZTczN2M3NGU3tqVIdQ==: --dhchap-ctrl-secret DHHC-1:01:OWQ5NjI1OGE5N2ViNGQ0NTRiMDcxYjYxMDBlNTY3OWGFzkPl: 00:17:16.984 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.984 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:16.984 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.985 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.985 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.985 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:16.985 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:16.985 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:17.242 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:17:17.242 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:17.242 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:17.242 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:17.242 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:17.242 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.242 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:17.242 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.242 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.242 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.242 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:17.242 11:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:17.805 00:17:17.805 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:17.806 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.806 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.062 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.062 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.062 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.062 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.062 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.062 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:18.062 { 00:17:18.062 "cntlid": 23, 00:17:18.062 "qid": 0, 00:17:18.062 "state": "enabled", 00:17:18.062 "thread": "nvmf_tgt_poll_group_000", 00:17:18.062 "listen_address": { 00:17:18.062 "trtype": "TCP", 00:17:18.062 "adrfam": "IPv4", 00:17:18.062 "traddr": "10.0.0.2", 00:17:18.062 "trsvcid": "4420" 00:17:18.062 }, 00:17:18.062 "peer_address": { 00:17:18.062 "trtype": "TCP", 00:17:18.062 "adrfam": "IPv4", 00:17:18.062 "traddr": "10.0.0.1", 00:17:18.062 "trsvcid": "39720" 00:17:18.062 }, 00:17:18.062 "auth": { 00:17:18.062 "state": "completed", 00:17:18.062 "digest": "sha256", 00:17:18.062 "dhgroup": "ffdhe3072" 00:17:18.062 } 00:17:18.062 } 00:17:18.062 ]' 00:17:18.062 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:18.062 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:18.062 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:18.062 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:18.062 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:18.319 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.319 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.319 11:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.576 11:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZjJlN2RiYzI5ZTVlN2MwZDdkMmRiN2VlMDJhODgzNDJmYzk3NDE5NDNmNTlhOTczN2MxNDg1NjM5MTcwMzQzODgS8Js=: 00:17:19.947 11:25:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.947 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:19.947 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.947 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.947 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.947 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:19.947 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:19.947 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:19.947 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:20.204 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:17:20.204 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:20.204 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:20.204 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:20.204 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:20.204 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.204 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.204 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.204 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.204 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.204 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.204 11:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.461 00:17:20.461 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:20.461 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:20.461 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.392 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.392 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.392 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.392 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.392 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.392 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:21.392 { 00:17:21.392 "cntlid": 25, 00:17:21.392 "qid": 0, 00:17:21.392 "state": "enabled", 00:17:21.392 "thread": "nvmf_tgt_poll_group_000", 00:17:21.392 "listen_address": { 00:17:21.392 "trtype": "TCP", 00:17:21.392 "adrfam": "IPv4", 00:17:21.392 "traddr": "10.0.0.2", 00:17:21.392 "trsvcid": "4420" 00:17:21.392 }, 00:17:21.392 "peer_address": { 00:17:21.392 "trtype": "TCP", 00:17:21.392 "adrfam": "IPv4", 00:17:21.392 "traddr": "10.0.0.1", 00:17:21.392 "trsvcid": "39752" 00:17:21.392 }, 00:17:21.392 "auth": { 00:17:21.392 "state": "completed", 00:17:21.392 "digest": "sha256", 00:17:21.392 "dhgroup": "ffdhe4096" 00:17:21.392 } 00:17:21.392 } 00:17:21.392 ]' 00:17:21.392 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:21.392 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:21.392 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:21.392 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:21.392 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:21.392 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.392 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.392 11:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.956 11:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZjA0NDY0NTdmNGIyMDI4N2Q1ZTA0M2QzZmIxMWVkOWY2YjcyNDAxZGFkYzE3NWI04BxqDw==: --dhchap-ctrl-secret DHHC-1:03:ZGM1MGExMDMyMmQxMGExZGY2YWNiNGQxMDZlOTc5MjQyYjk3ZGYyMWM1N2U1NjE2NWQ3YjBhYjA3ODkyY2QzYrI9A0o=: 00:17:22.888 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:17:22.888 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:17:22.888 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:23.145 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:23.145 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:23.145 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:23.145 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:17:23.145 11:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:17:23.709 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1
00:17:23.709 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:23.709 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:23.709 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:17:23.709 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:17:23.709 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:23.709 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:23.709 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:23.709 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:23.709 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:23.709 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:23.710 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:23.967
00:17:23.967 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:23.967 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:23.967 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:24.532 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:24.532 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:24.532 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:24.532 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:24.532 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:24.532 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:24.532 {
00:17:24.532 "cntlid": 27,
00:17:24.532 "qid": 0,
00:17:24.532 "state": "enabled",
00:17:24.532 "thread": "nvmf_tgt_poll_group_000",
00:17:24.532 "listen_address": {
00:17:24.532 "trtype": "TCP",
00:17:24.532 "adrfam": "IPv4",
00:17:24.532 "traddr": "10.0.0.2",
00:17:24.532 "trsvcid": "4420"
00:17:24.532 },
00:17:24.532 "peer_address": {
00:17:24.532 "trtype": "TCP",
00:17:24.532 "adrfam": "IPv4",
00:17:24.532 "traddr": "10.0.0.1",
00:17:24.532 "trsvcid": "33352"
00:17:24.532 },
00:17:24.532 "auth": {
00:17:24.532 "state": "completed",
00:17:24.532 "digest": "sha256",
00:17:24.532 "dhgroup": "ffdhe4096"
00:17:24.532 }
00:17:24.532 }
00:17:24.532 ]'
00:17:24.532 11:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:24.532 11:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:24.532 11:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:24.532 11:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:17:24.532 11:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:24.532 11:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:24.532 11:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:24.532 11:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:24.789 11:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MGJhMmQyMDAwMjQ5NmNjZGY5MGYyMDA5Mzg2ODk4MzLMrl2/: --dhchap-ctrl-secret DHHC-1:02:OTNlOTU5NWI2OTY5YzMwNGU0ZDc2ZTlhZjY5NDhhYTNiYWYxODdjM2M5ZTQ4Nzg2Cu2x/g==:
00:17:26.160 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:26.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:26.160 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:17:26.160 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:26.160 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:26.160 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:26.160 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:26.160 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:17:26.160 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:17:26.418 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2
00:17:26.418 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:26.418 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:26.418 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:17:26.418 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:17:26.418 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:26.418 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:26.418 11:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:26.418 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:26.418 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:26.418 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:26.418 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:27.351
00:17:27.351 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:27.351 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:27.351 11:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:27.608 11:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:27.608 11:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:27.608 11:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:27.608 11:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:27.608 11:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:27.608 11:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:27.608 {
00:17:27.608 "cntlid": 29,
00:17:27.608 "qid": 0,
00:17:27.608 "state": "enabled",
00:17:27.608 "thread": "nvmf_tgt_poll_group_000",
00:17:27.608 "listen_address": {
00:17:27.608 "trtype": "TCP",
00:17:27.608 "adrfam": "IPv4",
00:17:27.608 "traddr": "10.0.0.2",
00:17:27.608 "trsvcid": "4420"
00:17:27.608 },
00:17:27.608 "peer_address": {
00:17:27.608 "trtype": "TCP",
00:17:27.608 "adrfam": "IPv4",
00:17:27.608 "traddr": "10.0.0.1",
00:17:27.608 "trsvcid": "33376"
00:17:27.608 },
00:17:27.608 "auth": {
00:17:27.608 "state": "completed",
00:17:27.608 "digest": "sha256",
00:17:27.608 "dhgroup": "ffdhe4096"
00:17:27.608 }
00:17:27.608 }
00:17:27.608 ]'
00:17:27.608 11:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:27.865 11:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:27.865 11:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:27.865 11:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:17:27.865 11:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:27.865 11:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:27.865 11:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:27.865 11:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:28.123 11:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YzRkM2JiMDBlNjMwZWEzN2U0ZWY1ZjU3YzlhMmE0MTNlZWMzMDlmZTczN2M3NGU3tqVIdQ==: --dhchap-ctrl-secret DHHC-1:01:OWQ5NjI1OGE5N2ViNGQ0NTRiMDcxYjYxMDBlNTY3OWGFzkPl:
00:17:29.054 11:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:29.312 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:29.312 11:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:17:29.312 11:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:29.312 11:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:29.312 11:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:29.312 11:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:29.312 11:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:17:29.312 11:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:17:29.876 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3
00:17:29.876 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:29.876 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:17:29.876 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:17:29.876 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:17:29.876 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:29.876 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3
00:17:29.876 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:29.876 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:29.876 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:29.876 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:29.876 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:30.132
00:17:30.132 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:30.132 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:30.132 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:30.394 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:30.394 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:30.394 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:30.394 11:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:30.394 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 ==
0 ]] 00:17:30.394 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:30.394 { 00:17:30.394 "cntlid": 31, 00:17:30.394 "qid": 0, 00:17:30.394 "state": "enabled", 00:17:30.394 "thread": "nvmf_tgt_poll_group_000", 00:17:30.394 "listen_address": { 00:17:30.394 "trtype": "TCP", 00:17:30.394 "adrfam": "IPv4", 00:17:30.394 "traddr": "10.0.0.2", 00:17:30.394 "trsvcid": "4420" 00:17:30.394 }, 00:17:30.394 "peer_address": { 00:17:30.394 "trtype": "TCP", 00:17:30.394 "adrfam": "IPv4", 00:17:30.394 "traddr": "10.0.0.1", 00:17:30.394 "trsvcid": "33400" 00:17:30.394 }, 00:17:30.394 "auth": { 00:17:30.394 "state": "completed", 00:17:30.394 "digest": "sha256", 00:17:30.394 "dhgroup": "ffdhe4096" 00:17:30.394 } 00:17:30.394 } 00:17:30.394 ]' 00:17:30.394 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:30.689 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:30.689 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:30.689 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:30.689 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:30.689 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.689 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.689 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.257 11:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZjJlN2RiYzI5ZTVlN2MwZDdkMmRiN2VlMDJhODgzNDJmYzk3NDE5NDNmNTlhOTczN2MxNDg1NjM5MTcwMzQzODgS8Js=: 00:17:32.628 11:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.628 11:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:32.628 11:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.628 11:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.628 11:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.628 11:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:32.628 11:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:32.628 11:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:32.628 11:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:32.885 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:17:32.885 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:32.885 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:32.885 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:32.885 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:32.885 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.885 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.885 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.885 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.885 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.885 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.885 11:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.817 00:17:33.817 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:33.817 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:33.817 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.074 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.074 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.074 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.074 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.074 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.074 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:34.074 { 00:17:34.074 "cntlid": 33, 00:17:34.074 "qid": 0, 00:17:34.074 "state": "enabled", 00:17:34.074 "thread": "nvmf_tgt_poll_group_000", 00:17:34.074 "listen_address": { 
00:17:34.074 "trtype": "TCP", 00:17:34.074 "adrfam": "IPv4", 00:17:34.074 "traddr": "10.0.0.2", 00:17:34.074 "trsvcid": "4420" 00:17:34.074 }, 00:17:34.074 "peer_address": { 00:17:34.074 "trtype": "TCP", 00:17:34.074 "adrfam": "IPv4", 00:17:34.074 "traddr": "10.0.0.1", 00:17:34.074 "trsvcid": "33414" 00:17:34.074 }, 00:17:34.074 "auth": { 00:17:34.074 "state": "completed", 00:17:34.074 "digest": "sha256", 00:17:34.074 "dhgroup": "ffdhe6144" 00:17:34.074 } 00:17:34.074 } 00:17:34.074 ]' 00:17:34.074 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:34.074 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:34.074 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:34.074 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:34.074 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:34.336 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.336 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.336 11:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.594 11:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZjA0NDY0NTdmNGIyMDI4N2Q1ZTA0M2QzZmIxMWVkOWY2YjcyNDAxZGFkYzE3NWI04BxqDw==: --dhchap-ctrl-secret DHHC-1:03:ZGM1MGExMDMyMmQxMGExZGY2YWNiNGQxMDZlOTc5MjQyYjk3ZGYyMWM1N2U1NjE2NWQ3YjBhYjA3ODkyY2QzYrI9A0o=: 00:17:35.524 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.524 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:35.524 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.524 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.524 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.524 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:35.524 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:35.524 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:36.087 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:17:36.087 11:25:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:36.087 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:36.088 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:36.088 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:36.088 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.088 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.088 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.088 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.088 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.088 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.088 11:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.019 00:17:37.019 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:37.019 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.019 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:37.276 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.276 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.276 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.276 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.276 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.276 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:37.276 { 00:17:37.276 "cntlid": 35, 00:17:37.276 "qid": 0, 00:17:37.276 "state": "enabled", 00:17:37.276 "thread": "nvmf_tgt_poll_group_000", 00:17:37.276 "listen_address": { 00:17:37.276 "trtype": "TCP", 00:17:37.276 "adrfam": "IPv4", 00:17:37.276 "traddr": "10.0.0.2", 00:17:37.276 "trsvcid": "4420" 00:17:37.276 }, 00:17:37.276 "peer_address": { 00:17:37.276 "trtype": "TCP", 00:17:37.276 "adrfam": "IPv4", 00:17:37.276 "traddr": "10.0.0.1", 00:17:37.276 "trsvcid": "46590" 00:17:37.276 
}, 00:17:37.276 "auth": { 00:17:37.276 "state": "completed", 00:17:37.276 "digest": "sha256", 00:17:37.276 "dhgroup": "ffdhe6144" 00:17:37.276 } 00:17:37.276 } 00:17:37.276 ]' 00:17:37.276 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:37.276 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:37.276 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:37.276 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:37.276 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:37.276 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.276 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.276 11:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.844 11:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MGJhMmQyMDAwMjQ5NmNjZGY5MGYyMDA5Mzg2ODk4MzLMrl2/: --dhchap-ctrl-secret DHHC-1:02:OTNlOTU5NWI2OTY5YzMwNGU0ZDc2ZTlhZjY5NDhhYTNiYWYxODdjM2M5ZTQ4Nzg2Cu2x/g==: 00:17:39.217 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.217 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:39.217 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.217 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.217 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.217 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:39.217 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:39.217 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:39.217 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:17:39.217 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:39.217 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:39.217 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:39.217 11:25:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:39.217 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.217 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.217 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.217 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.217 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.217 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.217 11:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.150 00:17:40.150 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:40.150 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:40.150 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.408 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.408 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.408 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.408 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.408 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.408 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:40.408 { 00:17:40.408 "cntlid": 37, 00:17:40.408 "qid": 0, 00:17:40.408 "state": "enabled", 00:17:40.408 "thread": "nvmf_tgt_poll_group_000", 00:17:40.408 "listen_address": { 00:17:40.408 "trtype": "TCP", 00:17:40.408 "adrfam": "IPv4", 00:17:40.408 "traddr": "10.0.0.2", 00:17:40.408 "trsvcid": "4420" 00:17:40.408 }, 00:17:40.408 "peer_address": { 00:17:40.408 "trtype": "TCP", 00:17:40.408 "adrfam": "IPv4", 00:17:40.408 "traddr": "10.0.0.1", 00:17:40.408 "trsvcid": "46602" 00:17:40.408 }, 00:17:40.408 "auth": { 00:17:40.408 "state": "completed", 00:17:40.408 "digest": "sha256", 00:17:40.408 "dhgroup": "ffdhe6144" 00:17:40.408 } 00:17:40.408 } 00:17:40.408 ]' 00:17:40.408 11:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:40.408 11:25:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:40.408 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:40.408 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:40.408 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:40.666 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.666 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.666 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.924 11:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YzRkM2JiMDBlNjMwZWEzN2U0ZWY1ZjU3YzlhMmE0MTNlZWMzMDlmZTczN2M3NGU3tqVIdQ==: --dhchap-ctrl-secret DHHC-1:01:OWQ5NjI1OGE5N2ViNGQ0NTRiMDcxYjYxMDBlNTY3OWGFzkPl: 00:17:42.297 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.297 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:42.297 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.297 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.297 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.297 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:42.297 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:42.297 11:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:42.601 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:17:42.601 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:42.601 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:42.601 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:42.601 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:42.601 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.601 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:42.601 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.601 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.601 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.601 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:42.601 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:43.168 00:17:43.168 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:43.168 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:43.168 11:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.734 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.734 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.734 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.734 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.734 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.734 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:43.734 { 00:17:43.734 "cntlid": 39, 00:17:43.734 "qid": 0, 00:17:43.734 "state": "enabled", 00:17:43.734 "thread": "nvmf_tgt_poll_group_000", 00:17:43.734 "listen_address": { 00:17:43.734 "trtype": "TCP", 00:17:43.734 "adrfam": "IPv4", 00:17:43.734 "traddr": "10.0.0.2", 00:17:43.734 "trsvcid": "4420" 00:17:43.734 }, 00:17:43.734 "peer_address": { 00:17:43.734 "trtype": "TCP", 00:17:43.734 "adrfam": "IPv4", 00:17:43.734 "traddr": "10.0.0.1", 00:17:43.734 "trsvcid": "46638" 00:17:43.734 }, 00:17:43.734 "auth": { 00:17:43.734 "state": "completed", 00:17:43.734 "digest": "sha256", 00:17:43.734 "dhgroup": "ffdhe6144" 00:17:43.734 } 00:17:43.734 } 00:17:43.734 ]' 00:17:43.734 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:43.734 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:43.734 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:43.734 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:43.734 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:43.992 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.992 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.992 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.249 11:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZjJlN2RiYzI5ZTVlN2MwZDdkMmRiN2VlMDJhODgzNDJmYzk3NDE5NDNmNTlhOTczN2MxNDg1NjM5MTcwMzQzODgS8Js=: 00:17:45.220 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.220 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:45.220 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.220 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.478 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.478 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:45.478 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:45.478 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:45.478 11:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:45.736 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:17:45.736 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:45.736 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:45.736 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:45.736 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:45.736 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.736 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.736 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.736 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:45.736 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.736 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.736 11:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.669 00:17:46.669 11:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:46.669 11:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:46.669 11:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.235 11:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.235 11:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.235 11:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.235 11:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.235 11:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.235 11:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:47.235 { 00:17:47.235 "cntlid": 41, 00:17:47.235 "qid": 0, 00:17:47.235 "state": "enabled", 00:17:47.235 "thread": "nvmf_tgt_poll_group_000", 00:17:47.235 "listen_address": { 00:17:47.235 "trtype": "TCP", 00:17:47.235 "adrfam": "IPv4", 00:17:47.235 "traddr": "10.0.0.2", 00:17:47.235 "trsvcid": "4420" 00:17:47.235 }, 00:17:47.235 "peer_address": { 00:17:47.235 "trtype": "TCP", 00:17:47.235 "adrfam": "IPv4", 00:17:47.235 "traddr": "10.0.0.1", 00:17:47.235 "trsvcid": "50294" 00:17:47.235 }, 00:17:47.235 "auth": { 00:17:47.235 "state": "completed", 00:17:47.235 "digest": "sha256", 00:17:47.235 "dhgroup": "ffdhe8192" 00:17:47.235 } 00:17:47.235 } 00:17:47.235 ]' 00:17:47.235 11:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:47.235 11:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:47.235 11:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:47.235 11:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:47.235 11:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:47.235 11:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.235 11:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:47.235 11:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.802 11:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZjA0NDY0NTdmNGIyMDI4N2Q1ZTA0M2QzZmIxMWVkOWY2YjcyNDAxZGFkYzE3NWI04BxqDw==: --dhchap-ctrl-secret DHHC-1:03:ZGM1MGExMDMyMmQxMGExZGY2YWNiNGQxMDZlOTc5MjQyYjk3ZGYyMWM1N2U1NjE2NWQ3YjBhYjA3ODkyY2QzYrI9A0o=: 00:17:48.735 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.735 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:48.735 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.735 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.735 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.735 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:48.735 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:48.735 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:49.300 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:17:49.300 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:49.300 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:49.300 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:49.300 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:49.300 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.300 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.300 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.300 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.300 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.300 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.300 11:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.674 00:17:50.674 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.674 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:50.674 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.932 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.932 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.932 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.932 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.932 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.932 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:50.932 { 00:17:50.932 "cntlid": 43, 00:17:50.932 "qid": 0, 00:17:50.932 "state": "enabled", 00:17:50.932 "thread": "nvmf_tgt_poll_group_000", 00:17:50.932 "listen_address": { 00:17:50.932 "trtype": "TCP", 00:17:50.932 "adrfam": "IPv4", 00:17:50.932 "traddr": "10.0.0.2", 00:17:50.932 "trsvcid": "4420" 00:17:50.932 }, 00:17:50.932 "peer_address": { 00:17:50.932 "trtype": "TCP", 00:17:50.932 "adrfam": "IPv4", 00:17:50.932 "traddr": "10.0.0.1", 00:17:50.932 "trsvcid": "50310" 00:17:50.932 }, 00:17:50.932 "auth": { 00:17:50.932 "state": "completed", 00:17:50.932 "digest": "sha256", 00:17:50.932 "dhgroup": "ffdhe8192" 00:17:50.932 } 00:17:50.932 } 00:17:50.932 ]' 00:17:50.932 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:50.932 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:50.932 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:50.932 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:50.932 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:51.189 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.189 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.189 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.447 11:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MGJhMmQyMDAwMjQ5NmNjZGY5MGYyMDA5Mzg2ODk4MzLMrl2/: --dhchap-ctrl-secret DHHC-1:02:OTNlOTU5NWI2OTY5YzMwNGU0ZDc2ZTlhZjY5NDhhYTNiYWYxODdjM2M5ZTQ4Nzg2Cu2x/g==: 00:17:52.379 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.379 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:52.379 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.379 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.379 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.379 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:52.379 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:52.379 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:52.944 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:17:52.944 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:52.944 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:52.944 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:52.944 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:52.944 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.944 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.944 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.944 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.944 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.944 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.944 11:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.315 00:17:54.315 11:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.315 11:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.315 11:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.572 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.572 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.572 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.572 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.572 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.572 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.572 { 00:17:54.572 "cntlid": 45, 00:17:54.572 "qid": 0, 00:17:54.572 "state": "enabled", 00:17:54.572 "thread": "nvmf_tgt_poll_group_000", 00:17:54.572 "listen_address": { 00:17:54.572 "trtype": "TCP", 00:17:54.572 "adrfam": "IPv4", 00:17:54.572 "traddr": "10.0.0.2", 00:17:54.572 "trsvcid": "4420" 00:17:54.572 }, 00:17:54.572 "peer_address": { 00:17:54.572 "trtype": "TCP", 00:17:54.572 "adrfam": "IPv4", 00:17:54.572 "traddr": "10.0.0.1", 00:17:54.572 "trsvcid": "37088" 00:17:54.572 }, 00:17:54.572 "auth": { 00:17:54.572 "state": "completed", 00:17:54.572 "digest": "sha256", 00:17:54.572 "dhgroup": "ffdhe8192" 00:17:54.572 } 00:17:54.572 } 00:17:54.572 ]' 00:17:54.572 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.572 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:54.572 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.572 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:54.572 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.572 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.572 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.572 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.829 11:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YzRkM2JiMDBlNjMwZWEzN2U0ZWY1ZjU3YzlhMmE0MTNlZWMzMDlmZTczN2M3NGU3tqVIdQ==: --dhchap-ctrl-secret 
DHHC-1:01:OWQ5NjI1OGE5N2ViNGQ0NTRiMDcxYjYxMDBlNTY3OWGFzkPl: 00:17:56.199 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.199 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:56.199 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.199 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.199 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.199 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:56.199 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:56.199 11:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:56.456 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:17:56.456 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:56.456 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:56.456 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:56.456 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:56.456 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.456 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:56.456 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.456 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.456 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.456 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.456 11:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:57.387 00:17:57.387 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:57.387 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:57.387 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.949 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.949 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.949 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.949 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.949 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.949 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:57.949 { 00:17:57.949 "cntlid": 47, 00:17:57.949 "qid": 0, 00:17:57.949 "state": "enabled", 00:17:57.949 "thread": "nvmf_tgt_poll_group_000", 00:17:57.949 "listen_address": { 00:17:57.949 "trtype": "TCP", 00:17:57.949 "adrfam": "IPv4", 00:17:57.949 "traddr": "10.0.0.2", 00:17:57.949 "trsvcid": "4420" 00:17:57.949 }, 00:17:57.949 "peer_address": { 00:17:57.949 "trtype": "TCP", 00:17:57.949 "adrfam": "IPv4", 00:17:57.949 "traddr": "10.0.0.1", 00:17:57.949 "trsvcid": "37102" 00:17:57.949 }, 00:17:57.949 "auth": { 00:17:57.949 "state": "completed", 00:17:57.949 "digest": "sha256", 00:17:57.949 "dhgroup": "ffdhe8192" 00:17:57.949 } 00:17:57.949 } 00:17:57.949 ]' 00:17:57.949 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:57.949 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:57.949 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:57.949 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:57.949 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:57.949 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.949 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.949 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.206 11:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZjJlN2RiYzI5ZTVlN2MwZDdkMmRiN2VlMDJhODgzNDJmYzk3NDE5NDNmNTlhOTczN2MxNDg1NjM5MTcwMzQzODgS8Js=: 00:17:59.577 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.577 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:59.577 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.577 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.577 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.577 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:59.577 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:59.577 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:59.577 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:59.577 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:59.835 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:17:59.835 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.835 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:59.835 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:59.835 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:59.835 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.835 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.835 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.835 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.835 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.835 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.835 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.099 00:18:00.406 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.406 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.406 11:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.664 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.664 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.664 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.664 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.664 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.664 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:00.664 { 00:18:00.664 "cntlid": 49, 00:18:00.664 "qid": 0, 00:18:00.664 "state": "enabled", 00:18:00.664 "thread": "nvmf_tgt_poll_group_000", 00:18:00.664 "listen_address": { 00:18:00.664 "trtype": "TCP", 00:18:00.664 "adrfam": "IPv4", 00:18:00.664 "traddr": "10.0.0.2", 00:18:00.664 "trsvcid": "4420" 00:18:00.664 }, 00:18:00.664 "peer_address": { 00:18:00.664 "trtype": "TCP", 00:18:00.664 "adrfam": "IPv4", 00:18:00.664 "traddr": "10.0.0.1", 00:18:00.664 "trsvcid": "37130" 00:18:00.664 }, 00:18:00.664 "auth": { 00:18:00.664 "state": "completed", 00:18:00.664 "digest": "sha384", 00:18:00.664 "dhgroup": "null" 00:18:00.664 } 00:18:00.664 } 00:18:00.664 ]' 00:18:00.664 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:00.664 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:00.664 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.921 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:00.921 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.921 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.921 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.921 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.179 11:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZjA0NDY0NTdmNGIyMDI4N2Q1ZTA0M2QzZmIxMWVkOWY2YjcyNDAxZGFkYzE3NWI04BxqDw==: --dhchap-ctrl-secret DHHC-1:03:ZGM1MGExMDMyMmQxMGExZGY2YWNiNGQxMDZlOTc5MjQyYjk3ZGYyMWM1N2U1NjE2NWQ3YjBhYjA3ODkyY2QzYrI9A0o=: 00:18:02.572 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.572 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:02.572 11:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.572 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.572 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.572 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:02.572 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:02.572 11:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:02.830 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:18:02.830 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:02.830 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:02.830 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:02.830 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:02.830 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.830 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.830 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.830 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.830 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.830 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.830 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.089 00:18:03.089 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:03.089 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:03.089 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.346 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.346 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.346 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.346 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.346 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.346 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:03.346 { 00:18:03.346 "cntlid": 51, 00:18:03.346 "qid": 0, 00:18:03.346 "state": "enabled", 00:18:03.346 "thread": "nvmf_tgt_poll_group_000", 00:18:03.346 "listen_address": { 00:18:03.346 "trtype": "TCP", 00:18:03.346 "adrfam": "IPv4", 00:18:03.346 "traddr": "10.0.0.2", 00:18:03.346 "trsvcid": "4420" 00:18:03.346 }, 00:18:03.346 "peer_address": { 00:18:03.346 "trtype": "TCP", 00:18:03.346 "adrfam": "IPv4", 00:18:03.346 "traddr": "10.0.0.1", 00:18:03.346 "trsvcid": "37168" 00:18:03.346 }, 00:18:03.346 "auth": { 00:18:03.346 "state": "completed", 00:18:03.346 "digest": "sha384", 00:18:03.346 "dhgroup": "null" 00:18:03.346 } 00:18:03.346 } 00:18:03.346 ]' 00:18:03.346 11:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:03.604 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:03.604 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:03.604 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:03.604 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:03.604 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.604 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.604 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.169 11:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MGJhMmQyMDAwMjQ5NmNjZGY5MGYyMDA5Mzg2ODk4MzLMrl2/: --dhchap-ctrl-secret DHHC-1:02:OTNlOTU5NWI2OTY5YzMwNGU0ZDc2ZTlhZjY5NDhhYTNiYWYxODdjM2M5ZTQ4Nzg2Cu2x/g==: 00:18:05.100 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.100 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:05.101 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.101 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.101 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.101 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.101 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:05.101 11:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:05.665 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:18:05.665 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.665 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:05.665 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:05.665 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:05.665 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.665 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.665 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.665 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.665 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.665 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.665 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.922 00:18:05.922 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:05.922 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:05.922 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.488 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.488 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.488 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.488 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.488 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:18:06.488 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.488 { 00:18:06.488 "cntlid": 53, 00:18:06.488 "qid": 0, 00:18:06.488 "state": "enabled", 00:18:06.488 "thread": "nvmf_tgt_poll_group_000", 00:18:06.488 "listen_address": { 00:18:06.488 "trtype": "TCP", 00:18:06.488 "adrfam": "IPv4", 00:18:06.488 "traddr": "10.0.0.2", 00:18:06.488 "trsvcid": "4420" 00:18:06.488 }, 00:18:06.488 "peer_address": { 00:18:06.488 "trtype": "TCP", 00:18:06.488 "adrfam": "IPv4", 00:18:06.488 "traddr": "10.0.0.1", 00:18:06.488 "trsvcid": "50502" 00:18:06.488 }, 00:18:06.488 "auth": { 00:18:06.488 "state": "completed", 00:18:06.488 "digest": "sha384", 00:18:06.488 "dhgroup": "null" 00:18:06.488 } 00:18:06.488 } 00:18:06.488 ]' 00:18:06.488 11:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.488 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:06.488 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:06.488 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:06.488 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:06.488 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.488 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.489 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.053 11:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YzRkM2JiMDBlNjMwZWEzN2U0ZWY1ZjU3YzlhMmE0MTNlZWMzMDlmZTczN2M3NGU3tqVIdQ==: --dhchap-ctrl-secret DHHC-1:01:OWQ5NjI1OGE5N2ViNGQ0NTRiMDcxYjYxMDBlNTY3OWGFzkPl: 00:18:08.424 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.424 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:08.424 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.424 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.424 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.424 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:08.424 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:08.424 11:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:08.682 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:18:08.682 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:08.682 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:08.682 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:08.682 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:08.682 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.682 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:08.682 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.682 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.682 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.682 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:08.682 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:09.246 00:18:09.246 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:09.246 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:09.246 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.504 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.504 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.504 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.504 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.504 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.504 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:09.504 { 00:18:09.504 "cntlid": 55, 00:18:09.504 "qid": 0, 00:18:09.504 "state": "enabled", 00:18:09.504 "thread": "nvmf_tgt_poll_group_000", 00:18:09.504 "listen_address": { 00:18:09.504 "trtype": "TCP", 00:18:09.504 "adrfam": "IPv4", 00:18:09.504 "traddr": "10.0.0.2", 00:18:09.504 "trsvcid": "4420" 00:18:09.504 }, 00:18:09.504 "peer_address": { 
00:18:09.504 "trtype": "TCP", 00:18:09.504 "adrfam": "IPv4", 00:18:09.504 "traddr": "10.0.0.1", 00:18:09.504 "trsvcid": "50536" 00:18:09.504 }, 00:18:09.504 "auth": { 00:18:09.504 "state": "completed", 00:18:09.504 "digest": "sha384", 00:18:09.504 "dhgroup": "null" 00:18:09.504 } 00:18:09.504 } 00:18:09.504 ]' 00:18:09.504 11:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:09.504 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:09.504 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:09.504 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:09.504 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:09.504 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.504 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.504 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.069 11:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZjJlN2RiYzI5ZTVlN2MwZDdkMmRiN2VlMDJhODgzNDJmYzk3NDE5NDNmNTlhOTczN2MxNDg1NjM5MTcwMzQzODgS8Js=: 00:18:11.001 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.001 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:11.001 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.001 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.001 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.001 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:11.001 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.001 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:11.001 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:11.259 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:18:11.259 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.259 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:18:11.259 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:11.259 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:11.259 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.259 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.259 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.259 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.259 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.259 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.259 11:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.824 00:18:11.824 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:11.824 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.824 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.388 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.388 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.389 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.389 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.389 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.389 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.389 { 00:18:12.389 "cntlid": 57, 00:18:12.389 "qid": 0, 00:18:12.389 "state": "enabled", 00:18:12.389 "thread": "nvmf_tgt_poll_group_000", 00:18:12.389 "listen_address": { 00:18:12.389 "trtype": "TCP", 00:18:12.389 "adrfam": "IPv4", 00:18:12.389 "traddr": "10.0.0.2", 00:18:12.389 "trsvcid": "4420" 00:18:12.389 }, 00:18:12.389 "peer_address": { 00:18:12.389 "trtype": "TCP", 00:18:12.389 "adrfam": "IPv4", 00:18:12.389 "traddr": "10.0.0.1", 00:18:12.389 "trsvcid": "50566" 00:18:12.389 }, 00:18:12.389 "auth": { 00:18:12.389 "state": "completed", 00:18:12.389 "digest": "sha384", 00:18:12.389 "dhgroup": "ffdhe2048" 00:18:12.389 } 00:18:12.389 } 00:18:12.389 ]' 
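The round logged above is the pattern every iteration of this test repeats: the host app is restricted to one digest/dhgroup pair, the host NQN is re-added to the subsystem with the key under test, a controller is attached (which is when DH-HMAC-CHAP actually runs), and the target's qpair listing is checked for the negotiated parameters; the jq checks that follow below do that field-by-field verification. A consolidated, standalone sketch of one such round, assuming the same rpc.py path, sockets, NQNs, and addresses seen in this log, and assuming the keyring entries key0/ckey0 were registered earlier in the run as the full script does (HOST_KEY/CTRL_KEY are hypothetical env vars standing in for the generated DHHC-1 secrets):

    #!/usr/bin/env bash
    # Sketch of one connect_authenticate round (sha384 / ffdhe2048 / key index 0).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02

    # Host side: restrict the initiator to a single digest/dhgroup combination.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

    # Target side: allow the host NQN on the subsystem with the keys under test.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Host side: attach a controller; DH-HMAC-CHAP runs during this connect.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Target side: confirm the queue pair negotiated the expected parameters.
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == sha384    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed ]]

    # DHHC-1 secrets for the kernel-initiator path; in the real run these come
    # from the keys generated at the start of the script (hypothetical env vars).
    host_key=${HOST_KEY:?set to a DHHC-1 host secret}
    ctrl_key=${CTRL_KEY:?set to a DHHC-1 controller secret}

    # Tear down the RPC-attached controller, then repeat the handshake through
    # nvme-cli with explicit secrets, as the log does at target/auth.sh@52.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid cd6acfbe-4794-e311-a299-001e67a97b02 \
        --dhchap-secret "$host_key" --dhchap-ctrl-secret "$ctrl_key"
    nvme disconnect -n "$subnqn"
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The remove_host/add_host churn between rounds is deliberate: it forces each attach to renegotiate authentication from scratch rather than reuse prior state, which is why the log repeats the same command shapes for every digest, dhgroup, and key index combination.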
00:18:12.389 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:12.389 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:12.389 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:12.389 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:12.389 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:12.389 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.389 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.389 11:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.954 11:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZjA0NDY0NTdmNGIyMDI4N2Q1ZTA0M2QzZmIxMWVkOWY2YjcyNDAxZGFkYzE3NWI04BxqDw==: --dhchap-ctrl-secret DHHC-1:03:ZGM1MGExMDMyMmQxMGExZGY2YWNiNGQxMDZlOTc5MjQyYjk3ZGYyMWM1N2U1NjE2NWQ3YjBhYjA3ODkyY2QzYrI9A0o=: 00:18:13.885 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.885 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:13.885 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.885 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.885 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.885 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.885 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:13.885 11:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:14.455 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:18:14.455 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:14.455 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:14.455 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:14.455 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:14.455 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.455 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.455 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.455 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.455 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.455 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.455 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.069 00:18:15.069 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.069 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.069 11:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.634 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.634 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.634 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.634 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.634 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.634 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.634 { 00:18:15.634 "cntlid": 59, 00:18:15.634 "qid": 0, 00:18:15.634 "state": "enabled", 00:18:15.634 "thread": "nvmf_tgt_poll_group_000", 00:18:15.634 "listen_address": { 00:18:15.634 "trtype": "TCP", 00:18:15.634 "adrfam": "IPv4", 00:18:15.634 "traddr": "10.0.0.2", 00:18:15.634 "trsvcid": "4420" 00:18:15.634 }, 00:18:15.634 "peer_address": { 00:18:15.634 "trtype": "TCP", 00:18:15.634 "adrfam": "IPv4", 00:18:15.634 "traddr": "10.0.0.1", 00:18:15.634 "trsvcid": "35150" 00:18:15.634 }, 00:18:15.634 "auth": { 00:18:15.634 "state": "completed", 00:18:15.634 "digest": "sha384", 00:18:15.634 "dhgroup": "ffdhe2048" 00:18:15.634 } 00:18:15.634 } 00:18:15.634 ]' 00:18:15.634 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:15.634 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:15.634 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:15.634 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:15.634 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.634 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.634 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.634 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.200 11:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MGJhMmQyMDAwMjQ5NmNjZGY5MGYyMDA5Mzg2ODk4MzLMrl2/: --dhchap-ctrl-secret DHHC-1:02:OTNlOTU5NWI2OTY5YzMwNGU0ZDc2ZTlhZjY5NDhhYTNiYWYxODdjM2M5ZTQ4Nzg2Cu2x/g==: 00:18:17.133 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.133 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:17.133 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.133 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.133 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.133 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:17.133 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:17.133 11:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:17.699 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:18:17.699 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.699 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:17.699 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:17.699 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:17.699 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.699 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.699 
11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.699 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.699 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.699 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.699 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.263 00:18:18.263 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:18.263 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:18.263 11:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.521 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.521 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.521 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.521 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.521 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.521 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:18.521 { 00:18:18.521 "cntlid": 61, 00:18:18.521 "qid": 0, 00:18:18.521 "state": "enabled", 00:18:18.521 "thread": "nvmf_tgt_poll_group_000", 00:18:18.521 "listen_address": { 00:18:18.521 "trtype": "TCP", 00:18:18.521 "adrfam": "IPv4", 00:18:18.521 "traddr": "10.0.0.2", 00:18:18.521 "trsvcid": "4420" 00:18:18.521 }, 00:18:18.521 "peer_address": { 00:18:18.521 "trtype": "TCP", 00:18:18.521 "adrfam": "IPv4", 00:18:18.521 "traddr": "10.0.0.1", 00:18:18.521 "trsvcid": "35180" 00:18:18.521 }, 00:18:18.521 "auth": { 00:18:18.521 "state": "completed", 00:18:18.521 "digest": "sha384", 00:18:18.521 "dhgroup": "ffdhe2048" 00:18:18.521 } 00:18:18.521 } 00:18:18.521 ]' 00:18:18.521 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:18.778 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:18.778 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:18.778 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:18.778 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:18.778 11:26:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.778 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.778 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.036 11:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YzRkM2JiMDBlNjMwZWEzN2U0ZWY1ZjU3YzlhMmE0MTNlZWMzMDlmZTczN2M3NGU3tqVIdQ==: --dhchap-ctrl-secret DHHC-1:01:OWQ5NjI1OGE5N2ViNGQ0NTRiMDcxYjYxMDBlNTY3OWGFzkPl: 00:18:20.409 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.409 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:20.409 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.409 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.409 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.409 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:20.409 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:20.409 11:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:20.667 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:18:20.667 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:20.667 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:20.667 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:20.667 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:20.667 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.667 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:20.667 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.667 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.667 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.667 
11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:20.667 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:21.231 00:18:21.489 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.489 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.489 11:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.781 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.781 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.781 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.781 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.781 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.781 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.781 { 00:18:21.781 "cntlid": 63, 00:18:21.781 "qid": 0, 00:18:21.781 "state": "enabled", 00:18:21.781 "thread": "nvmf_tgt_poll_group_000", 00:18:21.781 "listen_address": { 00:18:21.781 "trtype": "TCP", 00:18:21.781 "adrfam": "IPv4", 00:18:21.781 "traddr": "10.0.0.2", 00:18:21.781 "trsvcid": "4420" 00:18:21.781 }, 00:18:21.781 "peer_address": { 00:18:21.781 "trtype": "TCP", 00:18:21.781 "adrfam": "IPv4", 00:18:21.781 "traddr": "10.0.0.1", 00:18:21.781 "trsvcid": "35200" 00:18:21.781 }, 00:18:21.781 "auth": { 00:18:21.781 "state": "completed", 00:18:21.781 "digest": "sha384", 00:18:21.781 "dhgroup": "ffdhe2048" 00:18:21.781 } 00:18:21.781 } 00:18:21.781 ]' 00:18:21.781 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.781 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:21.781 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:21.781 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:21.781 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:21.781 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.781 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.781 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:18:22.346 11:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZjJlN2RiYzI5ZTVlN2MwZDdkMmRiN2VlMDJhODgzNDJmYzk3NDE5NDNmNTlhOTczN2MxNDg1NjM5MTcwMzQzODgS8Js=: 00:18:23.719 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.719 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:23.719 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.719 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.719 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.719 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:23.719 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.719 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:23.719 11:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:23.977 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:18:23.977 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.977 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:23.977 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:23.977 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:23.977 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.977 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.977 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.977 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.977 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.977 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.977 11:26:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.542 00:18:24.542 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:24.542 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:24.542 11:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.108 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.108 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.108 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.108 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.108 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.108 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:25.108 { 00:18:25.108 "cntlid": 65, 00:18:25.108 "qid": 0, 00:18:25.108 "state": "enabled", 00:18:25.108 "thread": "nvmf_tgt_poll_group_000", 00:18:25.108 "listen_address": { 00:18:25.108 "trtype": "TCP", 00:18:25.108 "adrfam": "IPv4", 00:18:25.108 "traddr": "10.0.0.2", 00:18:25.108 "trsvcid": "4420" 00:18:25.108 }, 00:18:25.108 "peer_address": { 00:18:25.108 "trtype": "TCP", 00:18:25.108 "adrfam": "IPv4", 00:18:25.108 "traddr": "10.0.0.1", 00:18:25.108 "trsvcid": "60444" 00:18:25.108 }, 00:18:25.108 "auth": { 00:18:25.108 "state": "completed", 00:18:25.108 "digest": "sha384", 00:18:25.108 "dhgroup": "ffdhe3072" 00:18:25.108 } 00:18:25.108 } 00:18:25.108 ]' 00:18:25.108 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:25.108 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:25.108 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:25.108 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:25.108 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:25.108 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.108 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.108 11:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.672 11:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid 
cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZjA0NDY0NTdmNGIyMDI4N2Q1ZTA0M2QzZmIxMWVkOWY2YjcyNDAxZGFkYzE3NWI04BxqDw==: --dhchap-ctrl-secret DHHC-1:03:ZGM1MGExMDMyMmQxMGExZGY2YWNiNGQxMDZlOTc5MjQyYjk3ZGYyMWM1N2U1NjE2NWQ3YjBhYjA3ODkyY2QzYrI9A0o=: 00:18:27.044 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.044 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:27.044 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.044 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.044 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.044 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:27.044 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:27.044 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:27.044 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:18:27.044 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:27.044 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:27.044 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:27.044 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:27.044 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.044 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.044 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.044 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.044 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.044 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.044 11:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.609 00:18:27.609 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:27.609 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:27.609 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.173 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.173 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.173 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.173 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.173 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.173 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:28.173 { 00:18:28.173 "cntlid": 67, 00:18:28.173 "qid": 0, 00:18:28.173 "state": "enabled", 00:18:28.173 "thread": "nvmf_tgt_poll_group_000", 00:18:28.173 "listen_address": { 00:18:28.173 "trtype": "TCP", 00:18:28.173 "adrfam": "IPv4", 00:18:28.173 "traddr": "10.0.0.2", 00:18:28.173 "trsvcid": "4420" 00:18:28.173 }, 00:18:28.173 "peer_address": { 00:18:28.173 "trtype": "TCP", 00:18:28.173 "adrfam": "IPv4", 00:18:28.173 "traddr": "10.0.0.1", 00:18:28.173 "trsvcid": "60474" 00:18:28.173 }, 00:18:28.173 "auth": { 00:18:28.173 "state": "completed", 00:18:28.173 "digest": "sha384", 00:18:28.173 "dhgroup": "ffdhe3072" 00:18:28.173 } 00:18:28.173 } 00:18:28.173 ]' 00:18:28.173 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:28.173 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:28.173 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:28.173 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:28.173 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:28.173 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.173 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.173 11:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.738 11:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MGJhMmQyMDAwMjQ5NmNjZGY5MGYyMDA5Mzg2ODk4MzLMrl2/: --dhchap-ctrl-secret DHHC-1:02:OTNlOTU5NWI2OTY5YzMwNGU0ZDc2ZTlhZjY5NDhhYTNiYWYxODdjM2M5ZTQ4Nzg2Cu2x/g==: 00:18:30.176 11:26:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.176 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:30.176 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.176 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.176 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.176 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:30.176 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:30.176 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:30.176 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:18:30.176 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:30.176 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:30.176 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:30.176 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:30.176 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.176 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.176 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.176 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.176 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.176 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.176 11:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.108 00:18:31.108 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:31.108 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 
-- # hostrpc bdev_nvme_get_controllers 00:18:31.108 11:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.366 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.366 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.366 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.366 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.366 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.366 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:31.366 { 00:18:31.366 "cntlid": 69, 00:18:31.366 "qid": 0, 00:18:31.366 "state": "enabled", 00:18:31.366 "thread": "nvmf_tgt_poll_group_000", 00:18:31.366 "listen_address": { 00:18:31.366 "trtype": "TCP", 00:18:31.366 "adrfam": "IPv4", 00:18:31.366 "traddr": "10.0.0.2", 00:18:31.366 "trsvcid": "4420" 00:18:31.366 }, 00:18:31.366 "peer_address": { 00:18:31.366 "trtype": "TCP", 00:18:31.366 "adrfam": "IPv4", 00:18:31.366 "traddr": "10.0.0.1", 00:18:31.366 "trsvcid": "60504" 00:18:31.366 }, 00:18:31.366 "auth": { 00:18:31.366 "state": "completed", 00:18:31.366 "digest": "sha384", 00:18:31.366 "dhgroup": "ffdhe3072" 00:18:31.366 } 00:18:31.366 } 00:18:31.366 ]' 00:18:31.366 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:31.623 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:31.623 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:31.623 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:31.623 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:31.623 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.623 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.623 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.187 11:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YzRkM2JiMDBlNjMwZWEzN2U0ZWY1ZjU3YzlhMmE0MTNlZWMzMDlmZTczN2M3NGU3tqVIdQ==: --dhchap-ctrl-secret DHHC-1:01:OWQ5NjI1OGE5N2ViNGQ0NTRiMDcxYjYxMDBlNTY3OWGFzkPl: 00:18:33.120 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.120 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:33.120 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.120 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.377 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.377 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:33.377 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:33.377 11:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:33.636 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:18:33.636 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:33.636 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:33.636 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:33.636 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:33.636 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.636 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:33.636 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.636 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.636 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.636 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:33.636 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:33.893 00:18:33.893 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:33.893 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:33.893 11:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.458 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.458 11:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.458 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.458 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.458 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.458 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:34.458 { 00:18:34.458 "cntlid": 71, 00:18:34.458 "qid": 0, 00:18:34.458 "state": "enabled", 00:18:34.458 "thread": "nvmf_tgt_poll_group_000", 00:18:34.458 "listen_address": { 00:18:34.458 "trtype": "TCP", 00:18:34.458 "adrfam": "IPv4", 00:18:34.458 "traddr": "10.0.0.2", 00:18:34.458 "trsvcid": "4420" 00:18:34.458 }, 00:18:34.458 "peer_address": { 00:18:34.458 "trtype": "TCP", 00:18:34.458 "adrfam": "IPv4", 00:18:34.458 "traddr": "10.0.0.1", 00:18:34.458 "trsvcid": "37804" 00:18:34.458 }, 00:18:34.458 "auth": { 00:18:34.458 "state": "completed", 00:18:34.458 "digest": "sha384", 00:18:34.458 "dhgroup": "ffdhe3072" 00:18:34.458 } 00:18:34.458 } 00:18:34.458 ]' 00:18:34.458 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:34.458 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:34.458 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:34.715 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:34.715 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:34.715 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.715 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.715 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.280 11:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZjJlN2RiYzI5ZTVlN2MwZDdkMmRiN2VlMDJhODgzNDJmYzk3NDE5NDNmNTlhOTczN2MxNDg1NjM5MTcwMzQzODgS8Js=: 00:18:36.650 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.650 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:36.650 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.650 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.650 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.650 11:26:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:36.650 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:36.650 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:36.650 11:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:36.650 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:18:36.650 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:36.650 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:36.650 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:36.650 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:36.650 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.650 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.650 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.650 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.650 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.650 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.650 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.216 00:18:37.216 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:37.216 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:37.216 11:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.781 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.781 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.782 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.782 11:26:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.782 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.782 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:37.782 { 00:18:37.782 "cntlid": 73, 00:18:37.782 "qid": 0, 00:18:37.782 "state": "enabled", 00:18:37.782 "thread": "nvmf_tgt_poll_group_000", 00:18:37.782 "listen_address": { 00:18:37.782 "trtype": "TCP", 00:18:37.782 "adrfam": "IPv4", 00:18:37.782 "traddr": "10.0.0.2", 00:18:37.782 "trsvcid": "4420" 00:18:37.782 }, 00:18:37.782 "peer_address": { 00:18:37.782 "trtype": "TCP", 00:18:37.782 "adrfam": "IPv4", 00:18:37.782 "traddr": "10.0.0.1", 00:18:37.782 "trsvcid": "37838" 00:18:37.782 }, 00:18:37.782 "auth": { 00:18:37.782 "state": "completed", 00:18:37.782 "digest": "sha384", 00:18:37.782 "dhgroup": "ffdhe4096" 00:18:37.782 } 00:18:37.782 } 00:18:37.782 ]' 00:18:37.782 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:37.782 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:37.782 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:37.782 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:37.782 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:38.039 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.039 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.039 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.295 11:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZjA0NDY0NTdmNGIyMDI4N2Q1ZTA0M2QzZmIxMWVkOWY2YjcyNDAxZGFkYzE3NWI04BxqDw==: --dhchap-ctrl-secret DHHC-1:03:ZGM1MGExMDMyMmQxMGExZGY2YWNiNGQxMDZlOTc5MjQyYjk3ZGYyMWM1N2U1NjE2NWQ3YjBhYjA3ODkyY2QzYrI9A0o=: 00:18:39.666 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.666 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:39.667 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.667 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.667 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.667 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:39.667 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:39.667 11:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:39.667 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:18:39.667 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:39.667 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:39.667 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:39.667 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:39.667 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.667 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.667 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.667 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.667 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.667 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.667 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.232 00:18:40.232 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:40.232 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:40.232 11:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.488 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.488 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.488 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.488 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.488 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.488 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:18:40.488 { 00:18:40.488 "cntlid": 75, 00:18:40.488 "qid": 0, 00:18:40.488 "state": "enabled", 00:18:40.488 "thread": "nvmf_tgt_poll_group_000", 00:18:40.488 "listen_address": { 00:18:40.488 "trtype": "TCP", 00:18:40.488 "adrfam": "IPv4", 00:18:40.488 "traddr": "10.0.0.2", 00:18:40.488 "trsvcid": "4420" 00:18:40.488 }, 00:18:40.488 "peer_address": { 00:18:40.488 "trtype": "TCP", 00:18:40.488 "adrfam": "IPv4", 00:18:40.488 "traddr": "10.0.0.1", 00:18:40.488 "trsvcid": "37864" 00:18:40.488 }, 00:18:40.488 "auth": { 00:18:40.488 "state": "completed", 00:18:40.488 "digest": "sha384", 00:18:40.488 "dhgroup": "ffdhe4096" 00:18:40.488 } 00:18:40.488 } 00:18:40.488 ]' 00:18:40.488 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:40.744 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:40.744 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:40.744 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:40.744 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:40.744 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.744 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.744 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.001 11:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MGJhMmQyMDAwMjQ5NmNjZGY5MGYyMDA5Mzg2ODk4MzLMrl2/: --dhchap-ctrl-secret DHHC-1:02:OTNlOTU5NWI2OTY5YzMwNGU0ZDc2ZTlhZjY5NDhhYTNiYWYxODdjM2M5ZTQ4Nzg2Cu2x/g==: 00:18:42.377 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.377 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.377 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:42.377 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.377 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.377 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.377 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:42.377 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:42.377 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:42.377 
11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:18:42.377 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.377 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:42.377 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:42.377 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:42.377 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.377 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.377 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.377 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.377 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.377 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.377 11:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.941 00:18:42.941 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:42.941 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:42.942 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.506 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.506 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.506 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.506 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.506 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.506 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:43.506 { 00:18:43.506 "cntlid": 77, 00:18:43.506 "qid": 0, 00:18:43.506 "state": "enabled", 00:18:43.506 "thread": "nvmf_tgt_poll_group_000", 00:18:43.506 "listen_address": { 00:18:43.506 "trtype": "TCP", 00:18:43.506 "adrfam": "IPv4", 00:18:43.506 "traddr": "10.0.0.2", 00:18:43.506 "trsvcid": "4420" 00:18:43.506 }, 00:18:43.506 "peer_address": { 
00:18:43.506 "trtype": "TCP", 00:18:43.506 "adrfam": "IPv4", 00:18:43.506 "traddr": "10.0.0.1", 00:18:43.506 "trsvcid": "37886" 00:18:43.506 }, 00:18:43.506 "auth": { 00:18:43.506 "state": "completed", 00:18:43.506 "digest": "sha384", 00:18:43.506 "dhgroup": "ffdhe4096" 00:18:43.506 } 00:18:43.506 } 00:18:43.506 ]' 00:18:43.506 11:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:43.506 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:43.506 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:43.506 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:43.506 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:43.764 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.764 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.764 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.023 11:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YzRkM2JiMDBlNjMwZWEzN2U0ZWY1ZjU3YzlhMmE0MTNlZWMzMDlmZTczN2M3NGU3tqVIdQ==: --dhchap-ctrl-secret DHHC-1:01:OWQ5NjI1OGE5N2ViNGQ0NTRiMDcxYjYxMDBlNTY3OWGFzkPl: 00:18:45.392 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.392 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.392 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:45.392 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.392 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.392 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.392 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:45.392 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:45.392 11:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:45.649 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:18:45.649 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:45.649 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:18:45.649 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:45.649 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:45.649 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.649 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:45.649 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.649 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.649 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.649 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:45.649 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:45.906 00:18:46.163 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:46.163 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:46.163 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.419 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.419 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.419 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.419 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.419 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.419 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:46.419 { 00:18:46.419 "cntlid": 79, 00:18:46.419 "qid": 0, 00:18:46.419 "state": "enabled", 00:18:46.419 "thread": "nvmf_tgt_poll_group_000", 00:18:46.419 "listen_address": { 00:18:46.419 "trtype": "TCP", 00:18:46.419 "adrfam": "IPv4", 00:18:46.419 "traddr": "10.0.0.2", 00:18:46.419 "trsvcid": "4420" 00:18:46.419 }, 00:18:46.419 "peer_address": { 00:18:46.419 "trtype": "TCP", 00:18:46.419 "adrfam": "IPv4", 00:18:46.419 "traddr": "10.0.0.1", 00:18:46.419 "trsvcid": "48934" 00:18:46.419 }, 00:18:46.419 "auth": { 00:18:46.419 "state": "completed", 00:18:46.419 "digest": "sha384", 00:18:46.419 "dhgroup": "ffdhe4096" 00:18:46.419 } 00:18:46.419 } 00:18:46.419 ]' 00:18:46.419 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:18:46.419 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:46.419 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:46.419 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:46.419 11:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:46.419 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.419 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.419 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.985 11:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZjJlN2RiYzI5ZTVlN2MwZDdkMmRiN2VlMDJhODgzNDJmYzk3NDE5NDNmNTlhOTczN2MxNDg1NjM5MTcwMzQzODgS8Js=: 00:18:48.356 11:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.356 11:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:48.356 11:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.356 11:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.356 11:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.356 11:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:48.356 11:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.356 11:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:48.356 11:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:48.356 11:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:18:48.356 11:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.356 11:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:48.356 11:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:48.356 11:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:48.356 11:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
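Each attach is verified rather than assumed to have authenticated: the trace reads the controller name back from the host and inspects the negotiated auth fields on the subsystem's qpair. Reusing the variables from the sketch above (the jq paths and expected values are taken verbatim from this run, here for the ffdhe6144 iteration that follows):

# The controller must exist on the host side...
[[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers \
     | jq -r '.[].name') == nvme0 ]]

# ...and the target must report a completed DH-HMAC-CHAP negotiation with
# exactly the digest and DH group configured for this iteration.
qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

The kernel initiator is exercised the same way through nvme connect, whose --dhchap-secret/--dhchap-ctrl-secret values carry a DHHC-1:NN: prefix; reading NN as the hash used to transform the key material (00 = unhashed, 01/02/03 = SHA-256/384/512) follows the NVMe in-band authentication secret format rather than anything this log states, so take that mapping as an assumption.
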
00:18:48.356 11:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.356 11:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.356 11:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.356 11:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.356 11:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.356 11:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.290 00:18:49.290 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:49.290 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:49.290 11:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.548 11:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.548 11:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.548 11:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.548 11:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.548 11:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.548 11:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:49.548 { 00:18:49.548 "cntlid": 81, 00:18:49.548 "qid": 0, 00:18:49.548 "state": "enabled", 00:18:49.548 "thread": "nvmf_tgt_poll_group_000", 00:18:49.548 "listen_address": { 00:18:49.548 "trtype": "TCP", 00:18:49.548 "adrfam": "IPv4", 00:18:49.548 "traddr": "10.0.0.2", 00:18:49.548 "trsvcid": "4420" 00:18:49.548 }, 00:18:49.548 "peer_address": { 00:18:49.548 "trtype": "TCP", 00:18:49.548 "adrfam": "IPv4", 00:18:49.548 "traddr": "10.0.0.1", 00:18:49.548 "trsvcid": "48968" 00:18:49.548 }, 00:18:49.548 "auth": { 00:18:49.548 "state": "completed", 00:18:49.548 "digest": "sha384", 00:18:49.548 "dhgroup": "ffdhe6144" 00:18:49.548 } 00:18:49.548 } 00:18:49.548 ]' 00:18:49.548 11:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:49.548 11:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:49.548 11:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.806 11:26:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:49.806 11:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.806 11:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.806 11:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.806 11:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.372 11:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZjA0NDY0NTdmNGIyMDI4N2Q1ZTA0M2QzZmIxMWVkOWY2YjcyNDAxZGFkYzE3NWI04BxqDw==: --dhchap-ctrl-secret DHHC-1:03:ZGM1MGExMDMyMmQxMGExZGY2YWNiNGQxMDZlOTc5MjQyYjk3ZGYyMWM1N2U1NjE2NWQ3YjBhYjA3ODkyY2QzYrI9A0o=: 00:18:51.305 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.305 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.305 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:51.305 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.305 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.305 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.305 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:51.305 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:51.305 11:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:51.562 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:18:51.562 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:51.562 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:51.562 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:51.562 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:51.562 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.562 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.562 11:26:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.562 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.562 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.562 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.562 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.494 00:18:52.494 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:52.494 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:52.494 11:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.752 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.752 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.752 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.752 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.752 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.752 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:52.752 { 00:18:52.752 "cntlid": 83, 00:18:52.752 "qid": 0, 00:18:52.752 "state": "enabled", 00:18:52.752 "thread": "nvmf_tgt_poll_group_000", 00:18:52.752 "listen_address": { 00:18:52.752 "trtype": "TCP", 00:18:52.752 "adrfam": "IPv4", 00:18:52.752 "traddr": "10.0.0.2", 00:18:52.752 "trsvcid": "4420" 00:18:52.752 }, 00:18:52.752 "peer_address": { 00:18:52.752 "trtype": "TCP", 00:18:52.752 "adrfam": "IPv4", 00:18:52.752 "traddr": "10.0.0.1", 00:18:52.752 "trsvcid": "48990" 00:18:52.752 }, 00:18:52.752 "auth": { 00:18:52.752 "state": "completed", 00:18:52.752 "digest": "sha384", 00:18:52.752 "dhgroup": "ffdhe6144" 00:18:52.752 } 00:18:52.752 } 00:18:52.752 ]' 00:18:52.752 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.752 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:52.752 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.752 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:52.752 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.752 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.752 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.752 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.317 11:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MGJhMmQyMDAwMjQ5NmNjZGY5MGYyMDA5Mzg2ODk4MzLMrl2/: --dhchap-ctrl-secret DHHC-1:02:OTNlOTU5NWI2OTY5YzMwNGU0ZDc2ZTlhZjY5NDhhYTNiYWYxODdjM2M5ZTQ4Nzg2Cu2x/g==: 00:18:54.249 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.249 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:54.249 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.249 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.249 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.249 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:54.249 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:54.249 11:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:54.815 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:18:54.815 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:54.815 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:54.815 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:54.815 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:54.815 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.815 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.815 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.815 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.815 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.815 11:26:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.815 11:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.380 00:18:55.380 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:55.380 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:55.380 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.946 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.946 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.946 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.946 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.946 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.946 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:55.946 { 00:18:55.946 "cntlid": 85, 00:18:55.946 "qid": 0, 00:18:55.946 "state": "enabled", 00:18:55.946 "thread": "nvmf_tgt_poll_group_000", 00:18:55.946 "listen_address": { 00:18:55.946 "trtype": "TCP", 00:18:55.946 "adrfam": "IPv4", 00:18:55.946 "traddr": "10.0.0.2", 00:18:55.946 "trsvcid": "4420" 00:18:55.946 }, 00:18:55.946 "peer_address": { 00:18:55.946 "trtype": "TCP", 00:18:55.946 "adrfam": "IPv4", 00:18:55.946 "traddr": "10.0.0.1", 00:18:55.946 "trsvcid": "33256" 00:18:55.946 }, 00:18:55.946 "auth": { 00:18:55.946 "state": "completed", 00:18:55.946 "digest": "sha384", 00:18:55.946 "dhgroup": "ffdhe6144" 00:18:55.946 } 00:18:55.946 } 00:18:55.946 ]' 00:18:55.946 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:55.946 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:55.946 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:56.204 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:56.204 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:56.204 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.204 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.204 11:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.462 11:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YzRkM2JiMDBlNjMwZWEzN2U0ZWY1ZjU3YzlhMmE0MTNlZWMzMDlmZTczN2M3NGU3tqVIdQ==: --dhchap-ctrl-secret DHHC-1:01:OWQ5NjI1OGE5N2ViNGQ0NTRiMDcxYjYxMDBlNTY3OWGFzkPl: 00:18:57.832 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.832 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:57.832 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.832 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.832 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.832 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:57.832 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:57.832 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:58.091 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:18:58.091 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.091 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:58.091 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:58.091 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:58.091 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.091 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:58.091 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.091 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.091 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.091 11:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:58.091 11:26:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:58.690 00:18:58.690 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.690 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.690 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.948 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.948 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.948 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.948 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.948 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.948 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:58.948 { 00:18:58.948 "cntlid": 87, 00:18:58.948 "qid": 0, 00:18:58.948 "state": "enabled", 00:18:58.948 "thread": "nvmf_tgt_poll_group_000", 00:18:58.948 "listen_address": { 00:18:58.948 "trtype": "TCP", 00:18:58.948 "adrfam": "IPv4", 00:18:58.948 "traddr": "10.0.0.2", 00:18:58.948 "trsvcid": "4420" 00:18:58.948 }, 00:18:58.948 "peer_address": { 00:18:58.948 "trtype": "TCP", 00:18:58.948 "adrfam": "IPv4", 00:18:58.948 "traddr": "10.0.0.1", 00:18:58.948 "trsvcid": "33300" 00:18:58.948 }, 00:18:58.948 "auth": { 00:18:58.948 "state": "completed", 00:18:58.948 "digest": "sha384", 00:18:58.948 "dhgroup": "ffdhe6144" 00:18:58.948 } 00:18:58.948 } 00:18:58.948 ]' 00:18:58.948 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:58.948 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:58.948 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:58.948 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:58.948 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:59.206 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.206 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.206 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.463 11:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 
--dhchap-secret DHHC-1:03:ZjJlN2RiYzI5ZTVlN2MwZDdkMmRiN2VlMDJhODgzNDJmYzk3NDE5NDNmNTlhOTczN2MxNDg1NjM5MTcwMzQzODgS8Js=: 00:19:00.396 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.396 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:00.396 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.396 11:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.396 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.396 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:00.396 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:00.396 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:00.396 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:00.962 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:19:00.962 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:00.962 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:00.962 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:00.962 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:00.962 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.962 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.962 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.962 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.962 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.962 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.962 11:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.932 00:19:01.932 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:01.932 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:01.932 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.188 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.188 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.188 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.188 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.188 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.188 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:02.188 { 00:19:02.188 "cntlid": 89, 00:19:02.188 "qid": 0, 00:19:02.188 "state": "enabled", 00:19:02.188 "thread": "nvmf_tgt_poll_group_000", 00:19:02.188 "listen_address": { 00:19:02.188 "trtype": "TCP", 00:19:02.188 "adrfam": "IPv4", 00:19:02.188 "traddr": "10.0.0.2", 00:19:02.188 "trsvcid": "4420" 00:19:02.188 }, 00:19:02.188 "peer_address": { 00:19:02.188 "trtype": "TCP", 00:19:02.188 "adrfam": "IPv4", 00:19:02.188 "traddr": "10.0.0.1", 00:19:02.188 "trsvcid": "33326" 00:19:02.188 }, 00:19:02.188 "auth": { 00:19:02.188 "state": "completed", 00:19:02.188 "digest": "sha384", 00:19:02.188 "dhgroup": "ffdhe8192" 00:19:02.188 } 00:19:02.188 } 00:19:02.188 ]' 00:19:02.189 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.445 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:02.445 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.445 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:02.445 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.445 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.445 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.445 11:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.703 11:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZjA0NDY0NTdmNGIyMDI4N2Q1ZTA0M2QzZmIxMWVkOWY2YjcyNDAxZGFkYzE3NWI04BxqDw==: --dhchap-ctrl-secret DHHC-1:03:ZGM1MGExMDMyMmQxMGExZGY2YWNiNGQxMDZlOTc5MjQyYjk3ZGYyMWM1N2U1NjE2NWQ3YjBhYjA3ODkyY2QzYrI9A0o=: 00:19:04.076 11:26:59 
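Success is then asserted by dumping the subsystem's queue pairs and checking the negotiated auth fields, as the jq checks above do for this round (sha384 over ffdhe8192); a sketch of that verification, assuming rpc_cmd still points at the target RPC socket:

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

    # The qpair must have finished authentication with exactly the
    # digest and DH group configured for this iteration.
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]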
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.076 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:04.076 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.076 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.076 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.076 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.076 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:04.076 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:04.076 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:19:04.076 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.076 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:04.076 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:04.076 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:04.076 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.076 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.076 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.076 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.076 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.076 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.076 11:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.446 00:19:05.446 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.446 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.446 11:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.703 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.703 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.703 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.703 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.703 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.703 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.703 { 00:19:05.703 "cntlid": 91, 00:19:05.703 "qid": 0, 00:19:05.703 "state": "enabled", 00:19:05.703 "thread": "nvmf_tgt_poll_group_000", 00:19:05.703 "listen_address": { 00:19:05.703 "trtype": "TCP", 00:19:05.703 "adrfam": "IPv4", 00:19:05.703 "traddr": "10.0.0.2", 00:19:05.703 "trsvcid": "4420" 00:19:05.703 }, 00:19:05.703 "peer_address": { 00:19:05.703 "trtype": "TCP", 00:19:05.703 "adrfam": "IPv4", 00:19:05.703 "traddr": "10.0.0.1", 00:19:05.703 "trsvcid": "32880" 00:19:05.703 }, 00:19:05.703 "auth": { 00:19:05.703 "state": "completed", 00:19:05.703 "digest": "sha384", 00:19:05.703 "dhgroup": "ffdhe8192" 00:19:05.703 } 00:19:05.703 } 00:19:05.703 ]' 00:19:05.703 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.703 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:05.703 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.703 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:05.703 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.703 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.703 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.703 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.267 11:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MGJhMmQyMDAwMjQ5NmNjZGY5MGYyMDA5Mzg2ODk4MzLMrl2/: --dhchap-ctrl-secret DHHC-1:02:OTNlOTU5NWI2OTY5YzMwNGU0ZDc2ZTlhZjY5NDhhYTNiYWYxODdjM2M5ZTQ4Nzg2Cu2x/g==: 00:19:07.199 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.199 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:07.199 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.199 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.199 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.199 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:07.199 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:07.199 11:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:07.457 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:19:07.457 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:07.457 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:07.457 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:07.457 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:07.457 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.457 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.457 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.457 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.457 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.457 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.457 11:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.827 00:19:08.827 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:08.827 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:08.827 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.086 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:19:09.086 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.086 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.086 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.086 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.086 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:09.086 { 00:19:09.086 "cntlid": 93, 00:19:09.086 "qid": 0, 00:19:09.086 "state": "enabled", 00:19:09.086 "thread": "nvmf_tgt_poll_group_000", 00:19:09.086 "listen_address": { 00:19:09.086 "trtype": "TCP", 00:19:09.086 "adrfam": "IPv4", 00:19:09.086 "traddr": "10.0.0.2", 00:19:09.086 "trsvcid": "4420" 00:19:09.086 }, 00:19:09.086 "peer_address": { 00:19:09.086 "trtype": "TCP", 00:19:09.086 "adrfam": "IPv4", 00:19:09.086 "traddr": "10.0.0.1", 00:19:09.086 "trsvcid": "32916" 00:19:09.086 }, 00:19:09.086 "auth": { 00:19:09.086 "state": "completed", 00:19:09.086 "digest": "sha384", 00:19:09.086 "dhgroup": "ffdhe8192" 00:19:09.086 } 00:19:09.086 } 00:19:09.086 ]' 00:19:09.086 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:09.086 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:09.086 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:09.344 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:09.344 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:09.344 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.344 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.344 11:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.601 11:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YzRkM2JiMDBlNjMwZWEzN2U0ZWY1ZjU3YzlhMmE0MTNlZWMzMDlmZTczN2M3NGU3tqVIdQ==: --dhchap-ctrl-secret DHHC-1:01:OWQ5NjI1OGE5N2ViNGQ0NTRiMDcxYjYxMDBlNTY3OWGFzkPl: 00:19:10.535 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.793 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:10.793 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.793 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.793 11:27:06 
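Each round also exercises the kernel initiator via nvme-cli, where the secrets are passed inline rather than by key name; sketched below with $host_key and $ctrl_key standing in for the DHHC-1 strings printed above:

    # --dhchap-secret is the host's key; --dhchap-ctrl-secret is the
    # controller key for bidirectional authentication.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
        --hostid cd6acfbe-4794-e311-a299-001e67a97b02 \
        --dhchap-secret "$host_key" --dhchap-ctrl-secret "$ctrl_key"

    # Tear down before the next digest/dhgroup/key combination.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0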
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.793 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.793 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:10.793 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:11.050 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:19:11.050 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:11.050 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:11.050 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:11.050 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:11.050 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.050 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:11.050 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.050 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.050 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.051 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:11.051 11:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:11.983 00:19:11.983 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.983 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.983 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.241 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.241 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.241 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.241 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:19:12.241 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.241 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:12.241 { 00:19:12.241 "cntlid": 95, 00:19:12.241 "qid": 0, 00:19:12.241 "state": "enabled", 00:19:12.241 "thread": "nvmf_tgt_poll_group_000", 00:19:12.241 "listen_address": { 00:19:12.241 "trtype": "TCP", 00:19:12.241 "adrfam": "IPv4", 00:19:12.241 "traddr": "10.0.0.2", 00:19:12.241 "trsvcid": "4420" 00:19:12.241 }, 00:19:12.241 "peer_address": { 00:19:12.241 "trtype": "TCP", 00:19:12.241 "adrfam": "IPv4", 00:19:12.241 "traddr": "10.0.0.1", 00:19:12.241 "trsvcid": "32924" 00:19:12.241 }, 00:19:12.241 "auth": { 00:19:12.241 "state": "completed", 00:19:12.241 "digest": "sha384", 00:19:12.241 "dhgroup": "ffdhe8192" 00:19:12.241 } 00:19:12.241 } 00:19:12.241 ]' 00:19:12.241 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:12.241 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:12.241 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:12.241 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:12.241 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:12.499 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.499 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.499 11:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.774 11:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZjJlN2RiYzI5ZTVlN2MwZDdkMmRiN2VlMDJhODgzNDJmYzk3NDE5NDNmNTlhOTczN2MxNDg1NjM5MTcwMzQzODgS8Js=: 00:19:14.178 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.178 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.178 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:14.178 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.178 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.178 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.178 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:14.178 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:14.178 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:14.178 11:27:09 
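Note that the key3 rounds (cntlid 87 and 95 above) never pass --dhchap-ctrl-secret: ckeys[3] is empty, so the ${ckeys[$3]:+...} expansion visible in the auth.sh@37 frames drops the controller-key flag entirely and authentication runs host-to-target only. The idiom in isolation, as a standalone sketch:

    ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2 [3]="")   # key3: no controller key

    for keyid in "${!ckeys[@]}"; do
        # :+ expands to the flag pair only when the entry is non-empty,
        # so keyid 3 yields an empty array and unidirectional auth.
        ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        echo "key$keyid -> ${ckey[*]:-<host key only>}"
    done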
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:14.178 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:14.178 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:19:14.178 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:14.178 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:14.178 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:14.178 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:14.178 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.178 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.178 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.178 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.178 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.178 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.178 11:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.436 00:19:14.693 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.693 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:14.693 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.951 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.951 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.951 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.951 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.951 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.951 11:27:10 
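The @91/@92/@93 frames mark the three nested loops driving the sweep, which has now reached sha512 with the null DH group, i.e. plain DH-HMAC-CHAP challenge/response with no ephemeral FFDHE exchange mixed in. Roughly, using only the sets seen in this stretch of the log (the full script iterates more of each):

    digests=(sha384 sha512)
    dhgroups=(ffdhe6144 ffdhe8192 null)

    for digest in "${digests[@]}"; do            # auth.sh@91
        for dhgroup in "${dhgroups[@]}"; do      # auth.sh@92
            for keyid in "${!keys[@]}"; do       # auth.sh@93
                hostrpc bdev_nvme_set_options \
                    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"   # @94
                connect_authenticate "$digest" "$dhgroup" "$keyid"            # @96
            done
        done
    done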
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:14.951 { 00:19:14.951 "cntlid": 97, 00:19:14.951 "qid": 0, 00:19:14.951 "state": "enabled", 00:19:14.951 "thread": "nvmf_tgt_poll_group_000", 00:19:14.951 "listen_address": { 00:19:14.951 "trtype": "TCP", 00:19:14.951 "adrfam": "IPv4", 00:19:14.951 "traddr": "10.0.0.2", 00:19:14.951 "trsvcid": "4420" 00:19:14.951 }, 00:19:14.951 "peer_address": { 00:19:14.951 "trtype": "TCP", 00:19:14.951 "adrfam": "IPv4", 00:19:14.951 "traddr": "10.0.0.1", 00:19:14.951 "trsvcid": "44564" 00:19:14.951 }, 00:19:14.951 "auth": { 00:19:14.951 "state": "completed", 00:19:14.951 "digest": "sha512", 00:19:14.951 "dhgroup": "null" 00:19:14.951 } 00:19:14.951 } 00:19:14.951 ]' 00:19:14.951 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:14.951 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:14.951 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:14.951 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:14.951 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:14.951 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.951 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.952 11:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.517 11:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZjA0NDY0NTdmNGIyMDI4N2Q1ZTA0M2QzZmIxMWVkOWY2YjcyNDAxZGFkYzE3NWI04BxqDw==: --dhchap-ctrl-secret DHHC-1:03:ZGM1MGExMDMyMmQxMGExZGY2YWNiNGQxMDZlOTc5MjQyYjk3ZGYyMWM1N2U1NjE2NWQ3YjBhYjA3ODkyY2QzYrI9A0o=: 00:19:16.890 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.890 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:16.890 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.890 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.890 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.890 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:16.890 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:16.890 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:17.147 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:19:17.148 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:17.148 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:17.148 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:17.148 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:17.148 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.148 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.148 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.148 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.148 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.148 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.148 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.406 00:19:17.406 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.406 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.406 11:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.663 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.663 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.663 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.663 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.663 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.663 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.663 { 00:19:17.663 "cntlid": 99, 00:19:17.663 "qid": 0, 00:19:17.663 "state": "enabled", 00:19:17.663 "thread": "nvmf_tgt_poll_group_000", 00:19:17.663 "listen_address": { 00:19:17.663 "trtype": "TCP", 00:19:17.663 "adrfam": "IPv4", 00:19:17.663 
"traddr": "10.0.0.2", 00:19:17.663 "trsvcid": "4420" 00:19:17.663 }, 00:19:17.664 "peer_address": { 00:19:17.664 "trtype": "TCP", 00:19:17.664 "adrfam": "IPv4", 00:19:17.664 "traddr": "10.0.0.1", 00:19:17.664 "trsvcid": "44598" 00:19:17.664 }, 00:19:17.664 "auth": { 00:19:17.664 "state": "completed", 00:19:17.664 "digest": "sha512", 00:19:17.664 "dhgroup": "null" 00:19:17.664 } 00:19:17.664 } 00:19:17.664 ]' 00:19:17.664 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.920 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:17.920 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.920 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:17.920 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:17.920 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.920 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.920 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.177 11:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MGJhMmQyMDAwMjQ5NmNjZGY5MGYyMDA5Mzg2ODk4MzLMrl2/: --dhchap-ctrl-secret DHHC-1:02:OTNlOTU5NWI2OTY5YzMwNGU0ZDc2ZTlhZjY5NDhhYTNiYWYxODdjM2M5ZTQ4Nzg2Cu2x/g==: 00:19:19.549 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.549 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.549 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:19.549 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.549 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.549 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.549 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.549 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:19.549 11:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:19.549 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:19:19.549 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.549 11:27:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:19.549 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:19.549 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:19.549 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.549 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.549 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.549 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.807 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.807 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.807 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.065 00:19:20.065 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.065 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.065 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.322 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.322 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.322 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.322 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.322 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.322 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.322 { 00:19:20.322 "cntlid": 101, 00:19:20.322 "qid": 0, 00:19:20.322 "state": "enabled", 00:19:20.322 "thread": "nvmf_tgt_poll_group_000", 00:19:20.322 "listen_address": { 00:19:20.322 "trtype": "TCP", 00:19:20.322 "adrfam": "IPv4", 00:19:20.322 "traddr": "10.0.0.2", 00:19:20.322 "trsvcid": "4420" 00:19:20.322 }, 00:19:20.322 "peer_address": { 00:19:20.322 "trtype": "TCP", 00:19:20.322 "adrfam": "IPv4", 00:19:20.323 "traddr": "10.0.0.1", 00:19:20.323 "trsvcid": "44622" 00:19:20.323 }, 00:19:20.323 "auth": { 00:19:20.323 "state": "completed", 00:19:20.323 "digest": "sha512", 00:19:20.323 "dhgroup": "null" 
00:19:20.323 } 00:19:20.323 } 00:19:20.323 ]' 00:19:20.323 11:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.581 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:20.581 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.581 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:20.581 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.581 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.581 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.581 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.838 11:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YzRkM2JiMDBlNjMwZWEzN2U0ZWY1ZjU3YzlhMmE0MTNlZWMzMDlmZTczN2M3NGU3tqVIdQ==: --dhchap-ctrl-secret DHHC-1:01:OWQ5NjI1OGE5N2ViNGQ0NTRiMDcxYjYxMDBlNTY3OWGFzkPl: 00:19:22.212 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.212 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:22.212 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.212 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.212 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.212 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:22.212 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:22.212 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:22.469 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:19:22.469 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.469 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:22.469 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:22.469 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:22.469 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.469 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:22.469 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.469 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.469 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.469 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:22.469 11:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:23.035 00:19:23.035 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:23.035 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:23.035 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.293 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.293 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.293 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.293 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.293 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.293 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.293 { 00:19:23.293 "cntlid": 103, 00:19:23.293 "qid": 0, 00:19:23.293 "state": "enabled", 00:19:23.293 "thread": "nvmf_tgt_poll_group_000", 00:19:23.293 "listen_address": { 00:19:23.293 "trtype": "TCP", 00:19:23.293 "adrfam": "IPv4", 00:19:23.293 "traddr": "10.0.0.2", 00:19:23.293 "trsvcid": "4420" 00:19:23.293 }, 00:19:23.293 "peer_address": { 00:19:23.293 "trtype": "TCP", 00:19:23.293 "adrfam": "IPv4", 00:19:23.293 "traddr": "10.0.0.1", 00:19:23.293 "trsvcid": "44642" 00:19:23.293 }, 00:19:23.293 "auth": { 00:19:23.293 "state": "completed", 00:19:23.293 "digest": "sha512", 00:19:23.293 "dhgroup": "null" 00:19:23.293 } 00:19:23.293 } 00:19:23.293 ]' 00:19:23.293 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.293 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:23.293 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:23.293 11:27:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:23.293 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:23.293 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.293 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.293 11:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.856 11:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZjJlN2RiYzI5ZTVlN2MwZDdkMmRiN2VlMDJhODgzNDJmYzk3NDE5NDNmNTlhOTczN2MxNDg1NjM5MTcwMzQzODgS8Js=: 00:19:25.226 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.226 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:25.226 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.226 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.226 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.226 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:25.226 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.226 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:25.226 11:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:25.484 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:19:25.484 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:25.484 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:25.484 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:25.484 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:25.484 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.484 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.484 11:27:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.484 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.484 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.484 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.484 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.048 00:19:26.048 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.048 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.048 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.305 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.305 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.305 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.305 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.305 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.305 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.305 { 00:19:26.305 "cntlid": 105, 00:19:26.305 "qid": 0, 00:19:26.305 "state": "enabled", 00:19:26.305 "thread": "nvmf_tgt_poll_group_000", 00:19:26.305 "listen_address": { 00:19:26.305 "trtype": "TCP", 00:19:26.305 "adrfam": "IPv4", 00:19:26.305 "traddr": "10.0.0.2", 00:19:26.305 "trsvcid": "4420" 00:19:26.305 }, 00:19:26.305 "peer_address": { 00:19:26.305 "trtype": "TCP", 00:19:26.305 "adrfam": "IPv4", 00:19:26.306 "traddr": "10.0.0.1", 00:19:26.306 "trsvcid": "44492" 00:19:26.306 }, 00:19:26.306 "auth": { 00:19:26.306 "state": "completed", 00:19:26.306 "digest": "sha512", 00:19:26.306 "dhgroup": "ffdhe2048" 00:19:26.306 } 00:19:26.306 } 00:19:26.306 ]' 00:19:26.306 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.306 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:26.306 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.563 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:26.563 11:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.563 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.563 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.563 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.821 11:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZjA0NDY0NTdmNGIyMDI4N2Q1ZTA0M2QzZmIxMWVkOWY2YjcyNDAxZGFkYzE3NWI04BxqDw==: --dhchap-ctrl-secret DHHC-1:03:ZGM1MGExMDMyMmQxMGExZGY2YWNiNGQxMDZlOTc5MjQyYjk3ZGYyMWM1N2U1NjE2NWQ3YjBhYjA3ODkyY2QzYrI9A0o=: 00:19:28.234 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.234 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.234 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:28.234 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.234 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.234 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.234 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.234 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:28.234 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:28.234 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:19:28.234 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.234 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:28.234 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:28.234 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:28.234 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.234 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.234 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.234 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.234 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:19:28.234 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.234 11:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.492 00:19:28.749 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:28.749 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.749 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.006 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.006 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.006 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.006 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.006 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.006 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.006 { 00:19:29.006 "cntlid": 107, 00:19:29.006 "qid": 0, 00:19:29.006 "state": "enabled", 00:19:29.006 "thread": "nvmf_tgt_poll_group_000", 00:19:29.006 "listen_address": { 00:19:29.006 "trtype": "TCP", 00:19:29.006 "adrfam": "IPv4", 00:19:29.006 "traddr": "10.0.0.2", 00:19:29.006 "trsvcid": "4420" 00:19:29.006 }, 00:19:29.006 "peer_address": { 00:19:29.006 "trtype": "TCP", 00:19:29.006 "adrfam": "IPv4", 00:19:29.006 "traddr": "10.0.0.1", 00:19:29.006 "trsvcid": "44526" 00:19:29.006 }, 00:19:29.006 "auth": { 00:19:29.006 "state": "completed", 00:19:29.006 "digest": "sha512", 00:19:29.006 "dhgroup": "ffdhe2048" 00:19:29.006 } 00:19:29.006 } 00:19:29.006 ]' 00:19:29.006 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.006 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:29.006 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.006 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:29.006 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.006 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.006 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.006 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.263 11:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MGJhMmQyMDAwMjQ5NmNjZGY5MGYyMDA5Mzg2ODk4MzLMrl2/: --dhchap-ctrl-secret DHHC-1:02:OTNlOTU5NWI2OTY5YzMwNGU0ZDc2ZTlhZjY5NDhhYTNiYWYxODdjM2M5ZTQ4Nzg2Cu2x/g==: 00:19:30.635 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.635 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:30.635 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.635 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.635 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.635 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.635 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:30.635 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:30.893 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:19:30.893 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.893 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:30.893 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:30.893 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:30.893 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.893 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.893 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.893 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.893 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.893 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
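For reference, each connect_authenticate pass traced above boils down to the short RPC sequence below. This is a minimal bash sketch, not the CI script itself: it assumes a running SPDK target and host, the subsystem nqn.2024-03.io.spdk:cnode0 listening on 10.0.0.2:4420, the host RPC socket at /var/tmp/host.sock, and DHHC-1 secrets already registered under the key names key2/ckey2 (that registration happens earlier in the test and is not shown here); the RPC names, flags, and jq checks are taken from the trace, while the scaffolding around them is illustrative.

    #!/usr/bin/env bash
    # Sketch of one connect_authenticate pass (digest=sha512, dhgroup=ffdhe2048, keyid=2).
    # Assumption: keys "key2"/"ckey2" hold DHHC-1 secrets registered beforehand.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock
    nqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02

    # Restrict the host side to the digest/dhgroup pair under test.
    "$rpc" -s "$hostsock" bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

    # Allow the host on the subsystem, with host key and controller (bidirectional) key.
    "$rpc" nvmf_subsystem_add_host "$nqn" "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Attach; the fabric connect runs the DH-HMAC-CHAP exchange with those keys.
    "$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$nqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Verify the qpair negotiated what was requested, as the trace's jq checks
    # do against .[0].auth.{digest,dhgroup,state}.
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$nqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # Tear down before the next (digest, dhgroup, key) combination.
    "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
    "$rpc" nvmf_subsystem_remove_host "$nqn" "$hostnqn"

The nvme connect / nvme disconnect pairs that follow in the trace repeat the same exchange from the kernel initiator side, passing the secrets directly on the command line via --dhchap-secret and --dhchap-ctrl-secret instead of going through the SPDK host's keyring.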
00:19:30.893 11:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.459 00:19:31.717 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.717 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.717 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.976 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.976 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.976 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.976 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.976 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.976 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.976 { 00:19:31.976 "cntlid": 109, 00:19:31.976 "qid": 0, 00:19:31.976 "state": "enabled", 00:19:31.976 "thread": "nvmf_tgt_poll_group_000", 00:19:31.976 "listen_address": { 00:19:31.976 "trtype": "TCP", 00:19:31.976 "adrfam": "IPv4", 00:19:31.976 "traddr": "10.0.0.2", 00:19:31.976 "trsvcid": "4420" 00:19:31.976 }, 00:19:31.976 "peer_address": { 00:19:31.976 "trtype": "TCP", 00:19:31.976 "adrfam": "IPv4", 00:19:31.976 "traddr": "10.0.0.1", 00:19:31.976 "trsvcid": "44552" 00:19:31.976 }, 00:19:31.976 "auth": { 00:19:31.976 "state": "completed", 00:19:31.976 "digest": "sha512", 00:19:31.976 "dhgroup": "ffdhe2048" 00:19:31.976 } 00:19:31.976 } 00:19:31.976 ]' 00:19:31.976 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.976 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:31.976 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.976 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:31.976 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.976 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.976 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.976 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.541 11:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 
--hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YzRkM2JiMDBlNjMwZWEzN2U0ZWY1ZjU3YzlhMmE0MTNlZWMzMDlmZTczN2M3NGU3tqVIdQ==: --dhchap-ctrl-secret DHHC-1:01:OWQ5NjI1OGE5N2ViNGQ0NTRiMDcxYjYxMDBlNTY3OWGFzkPl: 00:19:33.475 11:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.475 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:33.475 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.475 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.475 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.475 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.475 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:33.475 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:34.041 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:19:34.041 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:34.041 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:34.041 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:34.041 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:34.041 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.041 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:34.041 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.041 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.041 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.041 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:34.041 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:34.299 00:19:34.299 11:27:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:34.299 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:34.299 11:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.864 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.864 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.864 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.864 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.864 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.864 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.864 { 00:19:34.864 "cntlid": 111, 00:19:34.864 "qid": 0, 00:19:34.864 "state": "enabled", 00:19:34.864 "thread": "nvmf_tgt_poll_group_000", 00:19:34.864 "listen_address": { 00:19:34.864 "trtype": "TCP", 00:19:34.864 "adrfam": "IPv4", 00:19:34.864 "traddr": "10.0.0.2", 00:19:34.864 "trsvcid": "4420" 00:19:34.864 }, 00:19:34.864 "peer_address": { 00:19:34.864 "trtype": "TCP", 00:19:34.864 "adrfam": "IPv4", 00:19:34.864 "traddr": "10.0.0.1", 00:19:34.864 "trsvcid": "47748" 00:19:34.864 }, 00:19:34.865 "auth": { 00:19:34.865 "state": "completed", 00:19:34.865 "digest": "sha512", 00:19:34.865 "dhgroup": "ffdhe2048" 00:19:34.865 } 00:19:34.865 } 00:19:34.865 ]' 00:19:34.865 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.865 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:34.865 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.865 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:34.865 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:35.122 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.122 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.122 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.380 11:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZjJlN2RiYzI5ZTVlN2MwZDdkMmRiN2VlMDJhODgzNDJmYzk3NDE5NDNmNTlhOTczN2MxNDg1NjM5MTcwMzQzODgS8Js=: 00:19:36.314 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.314 11:27:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:36.314 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.314 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.314 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.314 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:36.314 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:36.314 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:36.314 11:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:36.880 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:19:36.880 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:36.880 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:36.880 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:36.880 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:36.880 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.880 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.880 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.880 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.880 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.881 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.881 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.139 00:19:37.139 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:37.139 11:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:37.139 11:27:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.705 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.705 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.705 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.705 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.705 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.705 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:37.705 { 00:19:37.705 "cntlid": 113, 00:19:37.705 "qid": 0, 00:19:37.705 "state": "enabled", 00:19:37.705 "thread": "nvmf_tgt_poll_group_000", 00:19:37.705 "listen_address": { 00:19:37.705 "trtype": "TCP", 00:19:37.705 "adrfam": "IPv4", 00:19:37.705 "traddr": "10.0.0.2", 00:19:37.705 "trsvcid": "4420" 00:19:37.705 }, 00:19:37.705 "peer_address": { 00:19:37.705 "trtype": "TCP", 00:19:37.705 "adrfam": "IPv4", 00:19:37.705 "traddr": "10.0.0.1", 00:19:37.705 "trsvcid": "47768" 00:19:37.705 }, 00:19:37.705 "auth": { 00:19:37.705 "state": "completed", 00:19:37.705 "digest": "sha512", 00:19:37.705 "dhgroup": "ffdhe3072" 00:19:37.705 } 00:19:37.705 } 00:19:37.705 ]' 00:19:37.705 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:37.705 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:37.705 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:37.705 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:37.705 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:37.705 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.705 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.705 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.279 11:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZjA0NDY0NTdmNGIyMDI4N2Q1ZTA0M2QzZmIxMWVkOWY2YjcyNDAxZGFkYzE3NWI04BxqDw==: --dhchap-ctrl-secret DHHC-1:03:ZGM1MGExMDMyMmQxMGExZGY2YWNiNGQxMDZlOTc5MjQyYjk3ZGYyMWM1N2U1NjE2NWQ3YjBhYjA3ODkyY2QzYrI9A0o=: 00:19:39.212 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.212 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:39.212 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.212 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.212 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.212 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.212 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:39.212 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:39.469 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:19:39.469 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.469 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:39.469 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:39.469 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:39.469 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.469 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.469 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.469 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.469 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.469 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.469 11:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.035 00:19:40.035 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:40.035 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.035 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.293 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:19:40.293 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.293 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.293 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.293 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.293 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.293 { 00:19:40.293 "cntlid": 115, 00:19:40.293 "qid": 0, 00:19:40.293 "state": "enabled", 00:19:40.293 "thread": "nvmf_tgt_poll_group_000", 00:19:40.293 "listen_address": { 00:19:40.293 "trtype": "TCP", 00:19:40.293 "adrfam": "IPv4", 00:19:40.293 "traddr": "10.0.0.2", 00:19:40.293 "trsvcid": "4420" 00:19:40.293 }, 00:19:40.293 "peer_address": { 00:19:40.293 "trtype": "TCP", 00:19:40.293 "adrfam": "IPv4", 00:19:40.293 "traddr": "10.0.0.1", 00:19:40.293 "trsvcid": "47792" 00:19:40.293 }, 00:19:40.293 "auth": { 00:19:40.293 "state": "completed", 00:19:40.293 "digest": "sha512", 00:19:40.293 "dhgroup": "ffdhe3072" 00:19:40.293 } 00:19:40.293 } 00:19:40.293 ]' 00:19:40.293 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.293 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:40.293 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.293 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:40.293 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.293 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.293 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.293 11:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.551 11:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MGJhMmQyMDAwMjQ5NmNjZGY5MGYyMDA5Mzg2ODk4MzLMrl2/: --dhchap-ctrl-secret DHHC-1:02:OTNlOTU5NWI2OTY5YzMwNGU0ZDc2ZTlhZjY5NDhhYTNiYWYxODdjM2M5ZTQ4Nzg2Cu2x/g==: 00:19:41.961 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.961 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.961 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:41.961 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.961 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.961 11:27:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.961 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:41.961 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:41.961 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:41.961 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:19:41.961 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:41.961 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:41.961 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:41.961 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:41.961 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.961 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.961 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.961 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.961 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.961 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.961 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.526 00:19:42.526 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:42.526 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:42.526 11:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.784 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.784 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.784 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.784 11:27:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.784 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.784 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:42.784 { 00:19:42.784 "cntlid": 117, 00:19:42.784 "qid": 0, 00:19:42.784 "state": "enabled", 00:19:42.784 "thread": "nvmf_tgt_poll_group_000", 00:19:42.784 "listen_address": { 00:19:42.784 "trtype": "TCP", 00:19:42.784 "adrfam": "IPv4", 00:19:42.784 "traddr": "10.0.0.2", 00:19:42.784 "trsvcid": "4420" 00:19:42.784 }, 00:19:42.784 "peer_address": { 00:19:42.784 "trtype": "TCP", 00:19:42.784 "adrfam": "IPv4", 00:19:42.784 "traddr": "10.0.0.1", 00:19:42.784 "trsvcid": "47816" 00:19:42.784 }, 00:19:42.784 "auth": { 00:19:42.784 "state": "completed", 00:19:42.784 "digest": "sha512", 00:19:42.784 "dhgroup": "ffdhe3072" 00:19:42.784 } 00:19:42.784 } 00:19:42.784 ]' 00:19:42.784 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:42.784 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:42.784 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:42.784 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:42.784 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:42.784 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.784 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.784 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.350 11:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YzRkM2JiMDBlNjMwZWEzN2U0ZWY1ZjU3YzlhMmE0MTNlZWMzMDlmZTczN2M3NGU3tqVIdQ==: --dhchap-ctrl-secret DHHC-1:01:OWQ5NjI1OGE5N2ViNGQ0NTRiMDcxYjYxMDBlNTY3OWGFzkPl: 00:19:44.722 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.722 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.722 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:44.722 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.722 11:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.722 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.722 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:44.722 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe3072 00:19:44.722 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:44.722 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:19:44.722 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:44.722 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:44.722 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:44.722 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:44.722 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.722 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:44.722 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.722 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.722 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.722 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:44.722 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:45.287 00:19:45.287 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.287 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.287 11:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:45.578 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.578 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.578 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.578 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.578 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.578 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:45.578 { 00:19:45.578 "cntlid": 119, 00:19:45.578 "qid": 0, 00:19:45.578 "state": "enabled", 00:19:45.578 "thread": 
"nvmf_tgt_poll_group_000", 00:19:45.578 "listen_address": { 00:19:45.578 "trtype": "TCP", 00:19:45.578 "adrfam": "IPv4", 00:19:45.578 "traddr": "10.0.0.2", 00:19:45.578 "trsvcid": "4420" 00:19:45.578 }, 00:19:45.578 "peer_address": { 00:19:45.578 "trtype": "TCP", 00:19:45.578 "adrfam": "IPv4", 00:19:45.578 "traddr": "10.0.0.1", 00:19:45.578 "trsvcid": "59448" 00:19:45.578 }, 00:19:45.578 "auth": { 00:19:45.578 "state": "completed", 00:19:45.578 "digest": "sha512", 00:19:45.578 "dhgroup": "ffdhe3072" 00:19:45.578 } 00:19:45.578 } 00:19:45.578 ]' 00:19:45.578 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.578 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:45.578 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.578 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:45.578 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.578 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.578 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.578 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.835 11:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZjJlN2RiYzI5ZTVlN2MwZDdkMmRiN2VlMDJhODgzNDJmYzk3NDE5NDNmNTlhOTczN2MxNDg1NjM5MTcwMzQzODgS8Js=: 00:19:47.207 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.208 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:47.208 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.208 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.208 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.208 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:47.208 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:47.208 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:47.208 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:47.465 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
00:19:47.208 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:19:47.208 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:47.208 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:19:47.208 11:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:19:47.465 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0
00:19:47.465 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:47.465 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:47.465 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:19:47.465 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:19:47.465 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:47.465 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:47.465 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:47.465 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:47.465 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:47.465 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:47.465 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:48.030
00:19:48.030 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:48.030 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:48.030 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:48.594 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:48.594 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:48.594 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:48.594 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:48.594 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:48.594 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:48.594 {
00:19:48.594 "cntlid": 121,
00:19:48.594 "qid": 0,
00:19:48.594 "state": "enabled",
00:19:48.594 "thread": "nvmf_tgt_poll_group_000",
00:19:48.594 "listen_address": {
00:19:48.594 "trtype": "TCP",
00:19:48.594 "adrfam": "IPv4",
00:19:48.594 "traddr": "10.0.0.2",
00:19:48.594 "trsvcid": "4420"
00:19:48.594 },
00:19:48.594 "peer_address": {
00:19:48.594 "trtype": "TCP",
00:19:48.594 "adrfam": "IPv4",
00:19:48.594 "traddr": "10.0.0.1",
00:19:48.594 "trsvcid": "59478"
00:19:48.594 },
00:19:48.594 "auth": {
00:19:48.594 "state": "completed",
00:19:48.594 "digest": "sha512",
00:19:48.594 "dhgroup": "ffdhe4096"
00:19:48.594 }
00:19:48.594 }
00:19:48.594 ]'
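
Note: the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion makes bidirectional authentication conditional: when a controller secret exists for the key index, both the subsystem host entry and the host-side controller get an extra --dhchap-ctrlr-key, so the target must prove possession of that key as well. key3 has no controller secret in this run, which is why its passes carry only --dhchap-key key3. The key0 pass above shows the bidirectional form; in essence:

  # both directions authenticated; key0/ckey0 are key names registered earlier in the test
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
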
"IPv4", 00:19:48.594 "traddr": "10.0.0.1", 00:19:48.594 "trsvcid": "59478" 00:19:48.594 }, 00:19:48.594 "auth": { 00:19:48.594 "state": "completed", 00:19:48.594 "digest": "sha512", 00:19:48.594 "dhgroup": "ffdhe4096" 00:19:48.594 } 00:19:48.594 } 00:19:48.594 ]' 00:19:48.594 11:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:48.594 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:48.594 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:48.594 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:48.594 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:48.594 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.594 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.594 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.851 11:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZjA0NDY0NTdmNGIyMDI4N2Q1ZTA0M2QzZmIxMWVkOWY2YjcyNDAxZGFkYzE3NWI04BxqDw==: --dhchap-ctrl-secret DHHC-1:03:ZGM1MGExMDMyMmQxMGExZGY2YWNiNGQxMDZlOTc5MjQyYjk3ZGYyMWM1N2U1NjE2NWQ3YjBhYjA3ODkyY2QzYrI9A0o=: 00:19:49.784 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.784 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:49.784 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.784 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.784 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.784 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:49.785 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:49.785 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:50.350 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:19:50.350 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:50.350 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:50.350 
00:19:50.350 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:19:50.350 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:50.350 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:50.350 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:50.350 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:50.351 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:50.351 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:50.351 11:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:50.609
00:19:50.609 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:50.609 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:50.609 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:51.175 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:51.175 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:51.175 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:51.175 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:51.175 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:51.175 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:51.175 {
00:19:51.175 "cntlid": 123,
00:19:51.175 "qid": 0,
00:19:51.175 "state": "enabled",
00:19:51.175 "thread": "nvmf_tgt_poll_group_000",
00:19:51.175 "listen_address": {
00:19:51.175 "trtype": "TCP",
00:19:51.175 "adrfam": "IPv4",
00:19:51.175 "traddr": "10.0.0.2",
00:19:51.175 "trsvcid": "4420"
00:19:51.175 },
00:19:51.175 "peer_address": {
00:19:51.175 "trtype": "TCP",
00:19:51.175 "adrfam": "IPv4",
00:19:51.175 "traddr": "10.0.0.1",
00:19:51.175 "trsvcid": "59500"
00:19:51.175 },
00:19:51.175 "auth": {
00:19:51.175 "state": "completed",
00:19:51.175 "digest": "sha512",
00:19:51.175 "dhgroup": "ffdhe4096"
00:19:51.175 }
00:19:51.175 }
00:19:51.175 ]'
00:19:51.175 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:51.175 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:51.175 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:51.175 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:19:51.175 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:51.175 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:51.175 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:51.175 11:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:51.741 11:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MGJhMmQyMDAwMjQ5NmNjZGY5MGYyMDA5Mzg2ODk4MzLMrl2/: --dhchap-ctrl-secret DHHC-1:02:OTNlOTU5NWI2OTY5YzMwNGU0ZDc2ZTlhZjY5NDhhYTNiYWYxODdjM2M5ZTQ4Nzg2Cu2x/g==:
00:19:52.674 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:52.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:52.674 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:19:52.674 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:52.674 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:52.674 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:52.674 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:52.674 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:19:52.674 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:19:53.240 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2
00:19:53.240 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:53.240 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:53.240 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:19:53.240 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:19:53.240 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
"ckey$3"}) 00:19:53.240 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.240 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.240 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.240 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.240 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.240 11:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.805 00:19:53.805 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:53.805 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:53.805 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.063 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.063 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.063 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.063 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.063 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.063 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:54.063 { 00:19:54.063 "cntlid": 125, 00:19:54.063 "qid": 0, 00:19:54.063 "state": "enabled", 00:19:54.063 "thread": "nvmf_tgt_poll_group_000", 00:19:54.063 "listen_address": { 00:19:54.063 "trtype": "TCP", 00:19:54.063 "adrfam": "IPv4", 00:19:54.063 "traddr": "10.0.0.2", 00:19:54.063 "trsvcid": "4420" 00:19:54.063 }, 00:19:54.063 "peer_address": { 00:19:54.063 "trtype": "TCP", 00:19:54.063 "adrfam": "IPv4", 00:19:54.063 "traddr": "10.0.0.1", 00:19:54.063 "trsvcid": "52952" 00:19:54.063 }, 00:19:54.063 "auth": { 00:19:54.063 "state": "completed", 00:19:54.063 "digest": "sha512", 00:19:54.063 "dhgroup": "ffdhe4096" 00:19:54.063 } 00:19:54.063 } 00:19:54.063 ]' 00:19:54.063 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:54.063 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:54.063 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:54.063 
00:19:54.063 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:19:54.063 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:54.321 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:54.321 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:54.321 11:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:54.579 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YzRkM2JiMDBlNjMwZWEzN2U0ZWY1ZjU3YzlhMmE0MTNlZWMzMDlmZTczN2M3NGU3tqVIdQ==: --dhchap-ctrl-secret DHHC-1:01:OWQ5NjI1OGE5N2ViNGQ0NTRiMDcxYjYxMDBlNTY3OWGFzkPl:
00:19:55.952 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:55.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:55.952 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:19:55.952 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:55.952 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:55.952 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:55.952 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:55.952 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:19:55.952 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:19:56.216 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3
00:19:56.216 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:56.216 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:56.216 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:19:56.216 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:19:56.216 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:56.216 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3
00:19:56.216 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:56.216 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:56.216 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:56.216 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:56.216 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:57.194
00:19:57.194 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:57.194 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:57.194 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:57.194 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:57.194 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:57.194 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:57.194 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:57.194 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:57.194 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:57.194 {
00:19:57.194 "cntlid": 127,
00:19:57.194 "qid": 0,
00:19:57.194 "state": "enabled",
00:19:57.194 "thread": "nvmf_tgt_poll_group_000",
00:19:57.194 "listen_address": {
00:19:57.194 "trtype": "TCP",
00:19:57.194 "adrfam": "IPv4",
00:19:57.194 "traddr": "10.0.0.2",
00:19:57.194 "trsvcid": "4420"
00:19:57.194 },
00:19:57.194 "peer_address": {
00:19:57.194 "trtype": "TCP",
00:19:57.194 "adrfam": "IPv4",
00:19:57.194 "traddr": "10.0.0.1",
00:19:57.194 "trsvcid": "52976"
00:19:57.194 },
00:19:57.194 "auth": {
00:19:57.194 "state": "completed",
00:19:57.194 "digest": "sha512",
00:19:57.194 "dhgroup": "ffdhe4096"
00:19:57.194 }
00:19:57.194 }
00:19:57.194 ]'
00:19:57.194 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:57.452 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:57.452 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:57.452 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:19:57.452 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:57.452 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:57.452 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:57.452 11:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:58.017 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZjJlN2RiYzI5ZTVlN2MwZDdkMmRiN2VlMDJhODgzNDJmYzk3NDE5NDNmNTlhOTczN2MxNDg1NjM5MTcwMzQzODgS8Js=:
00:19:58.948 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:58.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:58.949 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:19:58.949 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:58.949 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:58.949 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:58.949 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:19:58.949 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:58.949 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:19:58.949 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:19:59.514 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0
00:19:59.514 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:59.514 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:59.514 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:19:59.514 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:19:59.514 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:59.514 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:59.514 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:59.514 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:59.514 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
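
Note: two separate SPDK instances are being driven here: rpc_cmd talks to the nvmf target over its default RPC socket, while hostrpc wraps rpc.py -s /var/tmp/host.sock and configures a second SPDK application acting as the initiator (all the bdev_nvme_* calls). The ffdhe6144 passes that begin above therefore start by constraining what the initiator side may negotiate:

  # host-side initiator configuration, exactly as hostrpc expands it in this log
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
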
00:19:59.514 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:59.514 11:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:00.076
00:20:00.076 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:00.076 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:00.076 11:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:00.639 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:00.639 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:00.639 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:00.639 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:00.639 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:00.639 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:00.639 {
00:20:00.639 "cntlid": 129,
00:20:00.639 "qid": 0,
00:20:00.639 "state": "enabled",
00:20:00.639 "thread": "nvmf_tgt_poll_group_000",
00:20:00.639 "listen_address": {
00:20:00.639 "trtype": "TCP",
00:20:00.639 "adrfam": "IPv4",
00:20:00.639 "traddr": "10.0.0.2",
00:20:00.639 "trsvcid": "4420"
00:20:00.639 },
00:20:00.639 "peer_address": {
00:20:00.639 "trtype": "TCP",
00:20:00.639 "adrfam": "IPv4",
00:20:00.639 "traddr": "10.0.0.1",
00:20:00.639 "trsvcid": "53000"
00:20:00.639 },
00:20:00.639 "auth": {
00:20:00.639 "state": "completed",
00:20:00.639 "digest": "sha512",
00:20:00.639 "dhgroup": "ffdhe6144"
00:20:00.639 }
00:20:00.639 }
00:20:00.639 ]'
00:20:00.639 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:00.639 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:00.639 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:00.639 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:20:00.639 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:00.639 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:00.639 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:00.639 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:00.896 11:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZjA0NDY0NTdmNGIyMDI4N2Q1ZTA0M2QzZmIxMWVkOWY2YjcyNDAxZGFkYzE3NWI04BxqDw==: --dhchap-ctrl-secret DHHC-1:03:ZGM1MGExMDMyMmQxMGExZGY2YWNiNGQxMDZlOTc5MjQyYjk3ZGYyMWM1N2U1NjE2NWQ3YjBhYjA3ODkyY2QzYrI9A0o=:
00:20:02.268 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:02.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:02.268 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:20:02.268 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:02.268 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:02.268 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:02.268 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:02.268 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:20:02.268 11:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:20:02.525 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1
00:20:02.525 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:02.525 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:20:02.525 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:20:02.525 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:20:02.525 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:02.525 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:02.525 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:02.525 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:02.525 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:02.525 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:02.525 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:03.088
00:20:03.088 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:03.088 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:03.088 11:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:03.652 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:03.652 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:03.652 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:03.652 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:03.652 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:03.652 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:03.652 {
00:20:03.652 "cntlid": 131,
00:20:03.652 "qid": 0,
00:20:03.652 "state": "enabled",
00:20:03.652 "thread": "nvmf_tgt_poll_group_000",
00:20:03.652 "listen_address": {
00:20:03.652 "trtype": "TCP",
00:20:03.652 "adrfam": "IPv4",
00:20:03.652 "traddr": "10.0.0.2",
00:20:03.652 "trsvcid": "4420"
00:20:03.652 },
00:20:03.652 "peer_address": {
00:20:03.652 "trtype": "TCP",
00:20:03.652 "adrfam": "IPv4",
00:20:03.652 "traddr": "10.0.0.1",
00:20:03.652 "trsvcid": "53022"
00:20:03.652 },
00:20:03.652 "auth": {
00:20:03.652 "state": "completed",
00:20:03.652 "digest": "sha512",
00:20:03.652 "dhgroup": "ffdhe6144"
00:20:03.652 }
00:20:03.652 }
00:20:03.652 ]'
00:20:03.652 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:03.652 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:03.652 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:03.652 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:20:03.652 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:03.652 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:03.652 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:03.652 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:04.216 11:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MGJhMmQyMDAwMjQ5NmNjZGY5MGYyMDA5Mzg2ODk4MzLMrl2/: --dhchap-ctrl-secret DHHC-1:02:OTNlOTU5NWI2OTY5YzMwNGU0ZDc2ZTlhZjY5NDhhYTNiYWYxODdjM2M5ZTQ4Nzg2Cu2x/g==:
00:20:05.146 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:05.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:05.146 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:20:05.146 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:05.146 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:05.146 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:05.146 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:05.146 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:20:05.146 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:20:05.404 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2
00:20:05.404 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:05.404 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:20:05.404 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:20:05.404 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:20:05.404 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:05.404 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:05.404 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:05.404 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:05.404 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:05.404 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:05.404 11:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:05.968
00:20:05.968 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:05.968 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:05.968 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:06.534 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:06.534 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:06.534 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:06.534 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:06.534 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:06.534 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:06.534 {
00:20:06.534 "cntlid": 133,
00:20:06.534 "qid": 0,
00:20:06.534 "state": "enabled",
00:20:06.534 "thread": "nvmf_tgt_poll_group_000",
00:20:06.534 "listen_address": {
00:20:06.534 "trtype": "TCP",
00:20:06.534 "adrfam": "IPv4",
00:20:06.534 "traddr": "10.0.0.2",
00:20:06.534 "trsvcid": "4420"
00:20:06.534 },
00:20:06.534 "peer_address": {
00:20:06.534 "trtype": "TCP",
00:20:06.534 "adrfam": "IPv4",
00:20:06.534 "traddr": "10.0.0.1",
00:20:06.534 "trsvcid": "33404"
00:20:06.534 },
00:20:06.534 "auth": {
00:20:06.534 "state": "completed",
00:20:06.534 "digest": "sha512",
00:20:06.534 "dhgroup": "ffdhe6144"
00:20:06.534 }
00:20:06.534 }
00:20:06.534 ]'
00:20:06.534 11:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:06.534 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:06.534 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:06.534 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:20:06.534 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:06.534 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:06.534 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:06.534 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:07.100 11:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YzRkM2JiMDBlNjMwZWEzN2U0ZWY1ZjU3YzlhMmE0MTNlZWMzMDlmZTczN2M3NGU3tqVIdQ==: --dhchap-ctrl-secret DHHC-1:01:OWQ5NjI1OGE5N2ViNGQ0NTRiMDcxYjYxMDBlNTY3OWGFzkPl:
00:20:08.031 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:08.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
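
Note: a pattern worth calling out in the secrets: every --dhchap-secret is of the form DHHC-1:NN:<base64>:, and in this run the two-digit field tracks the key index (key0 uses DHHC-1:00:, key1 DHHC-1:01:, key2 DHHC-1:02:, key3 DHHC-1:03:). In the DH-HMAC-CHAP secret representation that field identifies the hash the secret was transformed with (00 meaning an untransformed secret), so the test appears to cover one secret of each variant; that reading is an editorial aside, not something this log states. The field itself is trivially inspectable:

  # e.g. pull the transform id out of a DHHC-1 secret held in a (hypothetical) variable
  awk -F: '{print $2}' <<< "$DHCHAP_KEY"    # prints 00..03 for the secrets in this run
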
00:20:08.031 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:20:08.031 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:08.031 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:08.031 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:08.031 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:08.031 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:20:08.031 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:20:08.288 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3
00:20:08.288 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:08.288 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:20:08.288 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:20:08.288 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:20:08.288 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:08.288 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3
00:20:08.288 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:08.288 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:08.288 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:08.288 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:08.288 11:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:09.221
00:20:09.221 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:09.221 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:09.221 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:09.479 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:09.479 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:09.479 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:09.479 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:09.479 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:09.479 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:09.479 {
00:20:09.479 "cntlid": 135,
00:20:09.479 "qid": 0,
00:20:09.479 "state": "enabled",
00:20:09.479 "thread": "nvmf_tgt_poll_group_000",
00:20:09.479 "listen_address": {
00:20:09.479 "trtype": "TCP",
00:20:09.479 "adrfam": "IPv4",
00:20:09.479 "traddr": "10.0.0.2",
00:20:09.479 "trsvcid": "4420"
00:20:09.479 },
00:20:09.479 "peer_address": {
00:20:09.479 "trtype": "TCP",
00:20:09.479 "adrfam": "IPv4",
00:20:09.479 "traddr": "10.0.0.1",
00:20:09.479 "trsvcid": "33434"
00:20:09.479 },
00:20:09.479 "auth": {
00:20:09.479 "state": "completed",
00:20:09.479 "digest": "sha512",
00:20:09.479 "dhgroup": "ffdhe6144"
00:20:09.479 }
00:20:09.479 }
00:20:09.479 ]'
00:20:09.479 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:09.479 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:09.479 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:09.479 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:20:09.479 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:09.479 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:09.479 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:09.479 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:10.044 11:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZjJlN2RiYzI5ZTVlN2MwZDdkMmRiN2VlMDJhODgzNDJmYzk3NDE5NDNmNTlhOTczN2MxNDg1NjM5MTcwMzQzODgS8Js=:
00:20:11.007 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:11.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:11.007 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:20:11.007 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:11.007 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:11.007 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
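
Note: between every attach and the qpair inspection, the script also confirms that the host-side controller actually came up: bdev_nvme_get_controllers is queried on the host socket and the sole name is asserted to be nvme0 (the [[ nvme0 == \n\v\m\e\0 ]] lines above, with bash's pattern-quoting of the right-hand side). Stand-alone, that gate reduces to something like:

  # the controller must exist before the auth results are trusted
  [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
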
00:20:11.007 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:20:11.007 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:11.007 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:20:11.007 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:20:11.574 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0
00:20:11.574 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:11.574 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:20:11.574 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:20:11.574 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:20:11.574 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:11.574 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:11.574 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:11.574 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:11.574 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:11.574 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:11.574 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:12.507
00:20:12.507 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:12.507 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:12.507 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:13.073 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:13.073 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.073 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.073 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.073 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:13.073 { 00:20:13.073 "cntlid": 137, 00:20:13.073 "qid": 0, 00:20:13.073 "state": "enabled", 00:20:13.073 "thread": "nvmf_tgt_poll_group_000", 00:20:13.073 "listen_address": { 00:20:13.073 "trtype": "TCP", 00:20:13.073 "adrfam": "IPv4", 00:20:13.073 "traddr": "10.0.0.2", 00:20:13.073 "trsvcid": "4420" 00:20:13.073 }, 00:20:13.073 "peer_address": { 00:20:13.073 "trtype": "TCP", 00:20:13.073 "adrfam": "IPv4", 00:20:13.073 "traddr": "10.0.0.1", 00:20:13.073 "trsvcid": "33462" 00:20:13.073 }, 00:20:13.073 "auth": { 00:20:13.073 "state": "completed", 00:20:13.073 "digest": "sha512", 00:20:13.073 "dhgroup": "ffdhe8192" 00:20:13.073 } 00:20:13.073 } 00:20:13.073 ]' 00:20:13.073 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:13.073 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:13.073 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:13.073 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:13.073 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:13.073 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.073 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.073 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.331 11:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZjA0NDY0NTdmNGIyMDI4N2Q1ZTA0M2QzZmIxMWVkOWY2YjcyNDAxZGFkYzE3NWI04BxqDw==: --dhchap-ctrl-secret DHHC-1:03:ZGM1MGExMDMyMmQxMGExZGY2YWNiNGQxMDZlOTc5MjQyYjk3ZGYyMWM1N2U1NjE2NWQ3YjBhYjA3ODkyY2QzYrI9A0o=: 00:20:14.704 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.704 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:14.705 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.705 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.705 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.705 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:14.705 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:14.705 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:14.962 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:20:14.962 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:14.962 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:14.962 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:14.962 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:14.962 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.962 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.962 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.962 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.962 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.962 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.962 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.336 00:20:16.336 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:16.336 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:16.336 11:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.902 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.902 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.902 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.902 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.902 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.902 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:16.902 { 00:20:16.902 "cntlid": 139, 00:20:16.902 "qid": 0, 00:20:16.902 "state": "enabled", 00:20:16.902 "thread": "nvmf_tgt_poll_group_000", 00:20:16.902 "listen_address": { 00:20:16.902 "trtype": "TCP", 00:20:16.902 "adrfam": "IPv4", 00:20:16.902 "traddr": "10.0.0.2", 00:20:16.902 "trsvcid": "4420" 00:20:16.902 }, 00:20:16.902 "peer_address": { 00:20:16.902 "trtype": "TCP", 00:20:16.902 "adrfam": "IPv4", 00:20:16.902 "traddr": "10.0.0.1", 00:20:16.902 "trsvcid": "36984" 00:20:16.902 }, 00:20:16.902 "auth": { 00:20:16.902 "state": "completed", 00:20:16.902 "digest": "sha512", 00:20:16.902 "dhgroup": "ffdhe8192" 00:20:16.902 } 00:20:16.902 } 00:20:16.902 ]' 00:20:16.902 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:16.902 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:16.902 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:16.902 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:16.902 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:16.902 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.902 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.902 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.467 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MGJhMmQyMDAwMjQ5NmNjZGY5MGYyMDA5Mzg2ODk4MzLMrl2/: --dhchap-ctrl-secret DHHC-1:02:OTNlOTU5NWI2OTY5YzMwNGU0ZDc2ZTlhZjY5NDhhYTNiYWYxODdjM2M5ZTQ4Nzg2Cu2x/g==: 00:20:18.839 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.839 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:18.839 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.839 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.839 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.839 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:18.839 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:18.839 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:18.839 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:20:18.839 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:18.839 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:18.839 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:18.839 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:18.839 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.839 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.839 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.839 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.839 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.839 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.839 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.772 00:20:19.772 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:19.772 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:19.772 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.337 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.337 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.337 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.337 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.337 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.337 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:20.337 { 00:20:20.337 "cntlid": 141, 00:20:20.337 "qid": 0, 00:20:20.337 "state": "enabled", 00:20:20.337 "thread": "nvmf_tgt_poll_group_000", 00:20:20.337 "listen_address": 
{ 00:20:20.337 "trtype": "TCP", 00:20:20.337 "adrfam": "IPv4", 00:20:20.337 "traddr": "10.0.0.2", 00:20:20.337 "trsvcid": "4420" 00:20:20.337 }, 00:20:20.337 "peer_address": { 00:20:20.337 "trtype": "TCP", 00:20:20.337 "adrfam": "IPv4", 00:20:20.337 "traddr": "10.0.0.1", 00:20:20.337 "trsvcid": "37010" 00:20:20.337 }, 00:20:20.337 "auth": { 00:20:20.337 "state": "completed", 00:20:20.337 "digest": "sha512", 00:20:20.337 "dhgroup": "ffdhe8192" 00:20:20.337 } 00:20:20.337 } 00:20:20.337 ]' 00:20:20.337 11:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:20.595 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:20.595 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:20.595 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:20.595 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:20.595 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.595 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.595 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.853 11:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:YzRkM2JiMDBlNjMwZWEzN2U0ZWY1ZjU3YzlhMmE0MTNlZWMzMDlmZTczN2M3NGU3tqVIdQ==: --dhchap-ctrl-secret DHHC-1:01:OWQ5NjI1OGE5N2ViNGQ0NTRiMDcxYjYxMDBlNTY3OWGFzkPl: 00:20:22.275 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.275 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:22.275 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.275 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.275 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.275 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:22.275 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:22.275 11:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:22.840 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:20:22.840 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:22.840 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:22.841 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:22.841 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:22.841 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.841 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:20:22.841 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.841 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.841 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.841 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:22.841 11:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:23.774 00:20:23.775 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.775 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.775 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.032 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.033 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.033 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.033 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.033 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.033 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:24.033 { 00:20:24.033 "cntlid": 143, 00:20:24.033 "qid": 0, 00:20:24.033 "state": "enabled", 00:20:24.033 "thread": "nvmf_tgt_poll_group_000", 00:20:24.033 "listen_address": { 00:20:24.033 "trtype": "TCP", 00:20:24.033 "adrfam": "IPv4", 00:20:24.033 "traddr": "10.0.0.2", 00:20:24.033 "trsvcid": "4420" 00:20:24.033 }, 00:20:24.033 "peer_address": { 00:20:24.033 "trtype": "TCP", 00:20:24.033 "adrfam": "IPv4", 00:20:24.033 "traddr": "10.0.0.1", 00:20:24.033 "trsvcid": "37022" 00:20:24.033 }, 00:20:24.033 "auth": { 00:20:24.033 "state": "completed", 00:20:24.033 "digest": "sha512", 00:20:24.033 "dhgroup": 
"ffdhe8192" 00:20:24.033 } 00:20:24.033 } 00:20:24.033 ]' 00:20:24.033 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:24.033 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:24.033 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:24.291 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:24.291 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:24.291 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.291 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.291 11:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.857 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZjJlN2RiYzI5ZTVlN2MwZDdkMmRiN2VlMDJhODgzNDJmYzk3NDE5NDNmNTlhOTczN2MxNDg1NjM5MTcwMzQzODgS8Js=: 00:20:26.247 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.247 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:26.247 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.247 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.247 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.247 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:26.247 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:20:26.247 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:26.247 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:26.247 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:26.247 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:26.247 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:20:26.247 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:26.247 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:26.247 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:26.247 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:26.247 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.247 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.247 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.247 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.247 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.247 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.247 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.638 00:20:27.638 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:27.638 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:27.638 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.895 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.895 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.895 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.895 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.895 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.895 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:27.895 { 00:20:27.895 "cntlid": 145, 00:20:27.895 "qid": 0, 00:20:27.895 "state": "enabled", 00:20:27.895 "thread": "nvmf_tgt_poll_group_000", 00:20:27.895 "listen_address": { 00:20:27.895 "trtype": "TCP", 00:20:27.895 "adrfam": "IPv4", 00:20:27.895 "traddr": "10.0.0.2", 00:20:27.895 "trsvcid": "4420" 00:20:27.895 }, 00:20:27.895 "peer_address": { 00:20:27.895 "trtype": "TCP", 00:20:27.895 "adrfam": "IPv4", 00:20:27.895 "traddr": "10.0.0.1", 00:20:27.895 "trsvcid": "47236" 00:20:27.895 }, 00:20:27.895 "auth": { 00:20:27.895 
"state": "completed", 00:20:27.895 "digest": "sha512", 00:20:27.895 "dhgroup": "ffdhe8192" 00:20:27.895 } 00:20:27.895 } 00:20:27.895 ]' 00:20:27.895 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:27.895 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:27.895 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:27.895 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:27.895 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:27.895 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.895 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.895 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.460 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZjA0NDY0NTdmNGIyMDI4N2Q1ZTA0M2QzZmIxMWVkOWY2YjcyNDAxZGFkYzE3NWI04BxqDw==: --dhchap-ctrl-secret DHHC-1:03:ZGM1MGExMDMyMmQxMGExZGY2YWNiNGQxMDZlOTc5MjQyYjk3ZGYyMWM1N2U1NjE2NWQ3YjBhYjA3ODkyY2QzYrI9A0o=: 00:20:29.393 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.393 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.393 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:29.393 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.393 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.393 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.393 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:20:29.393 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.393 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.394 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.394 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:29.394 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:29.394 11:28:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:29.394 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:29.394 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:29.394 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:29.394 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:29.394 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:29.394 11:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:30.775 request: 00:20:30.775 { 00:20:30.775 "name": "nvme0", 00:20:30.775 "trtype": "tcp", 00:20:30.775 "traddr": "10.0.0.2", 00:20:30.775 "adrfam": "ipv4", 00:20:30.775 "trsvcid": "4420", 00:20:30.775 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:30.775 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:30.775 "prchk_reftag": false, 00:20:30.775 "prchk_guard": false, 00:20:30.775 "hdgst": false, 00:20:30.775 "ddgst": false, 00:20:30.775 "dhchap_key": "key2", 00:20:30.775 "method": "bdev_nvme_attach_controller", 00:20:30.775 "req_id": 1 00:20:30.775 } 00:20:30.775 Got JSON-RPC error response 00:20:30.775 response: 00:20:30.775 { 00:20:30.775 "code": -5, 00:20:30.775 "message": "Input/output error" 00:20:30.775 } 00:20:30.775 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:30.775 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:30.775 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:30.775 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:30.775 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:30.775 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.775 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.775 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.775 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.775 
11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.775 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.775 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.775 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:30.775 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:30.775 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:30.775 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:30.775 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:30.775 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:30.775 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:30.775 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:30.775 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:31.711 request: 00:20:31.711 { 00:20:31.711 "name": "nvme0", 00:20:31.711 "trtype": "tcp", 00:20:31.711 "traddr": "10.0.0.2", 00:20:31.711 "adrfam": "ipv4", 00:20:31.711 "trsvcid": "4420", 00:20:31.711 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:31.711 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:31.711 "prchk_reftag": false, 00:20:31.711 "prchk_guard": false, 00:20:31.711 "hdgst": false, 00:20:31.711 "ddgst": false, 00:20:31.711 "dhchap_key": "key1", 00:20:31.711 "dhchap_ctrlr_key": "ckey2", 00:20:31.711 "method": "bdev_nvme_attach_controller", 00:20:31.711 "req_id": 1 00:20:31.711 } 00:20:31.711 Got JSON-RPC error response 00:20:31.711 response: 00:20:31.711 { 00:20:31.711 "code": -5, 00:20:31.711 "message": "Input/output error" 00:20:31.711 } 00:20:31.711 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:31.711 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:31.711 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:31.711 11:28:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:31.711 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:31.711 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.711 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.711 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.711 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:20:31.711 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.711 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.711 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.711 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.711 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:31.711 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.711 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:31.712 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:31.712 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:31.712 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:31.712 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.712 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.644 request: 00:20:32.644 { 00:20:32.644 "name": "nvme0", 00:20:32.644 "trtype": "tcp", 00:20:32.644 "traddr": "10.0.0.2", 00:20:32.644 "adrfam": "ipv4", 00:20:32.644 "trsvcid": "4420", 00:20:32.644 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:32.644 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:32.644 "prchk_reftag": false, 00:20:32.644 "prchk_guard": false, 00:20:32.644 "hdgst": false, 00:20:32.644 "ddgst": false, 00:20:32.644 "dhchap_key": "key1", 00:20:32.644 "dhchap_ctrlr_key": "ckey1", 00:20:32.644 "method": "bdev_nvme_attach_controller", 00:20:32.644 "req_id": 1 00:20:32.644 } 00:20:32.644 Got JSON-RPC error response 00:20:32.644 response: 00:20:32.644 { 00:20:32.644 "code": -5, 00:20:32.644 "message": "Input/output error" 00:20:32.644 } 00:20:32.644 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:32.644 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:32.644 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:32.644 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:32.644 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:32.644 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.644 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.644 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.644 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2108906 00:20:32.644 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2108906 ']' 00:20:32.644 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2108906 00:20:32.644 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:20:32.644 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:32.644 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2108906 00:20:32.644 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:32.644 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:32.644 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2108906' 00:20:32.644 killing process with pid 2108906 00:20:32.644 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2108906 00:20:32.644 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2108906 00:20:32.914 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:32.914 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:32.914 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:32.914 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.914 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # 
nvmfpid=2137365 00:20:32.914 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:32.914 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2137365 00:20:32.914 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2137365 ']' 00:20:32.914 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:32.914 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:32.914 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:32.914 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:32.914 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.484 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:33.484 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:33.484 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:33.484 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:33.484 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.484 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:33.484 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:33.484 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2137365 00:20:33.484 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2137365 ']' 00:20:33.484 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.484 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:33.484 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:33.484 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:33.484 11:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.741 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:33.741 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:33.741 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:20:33.741 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.741 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.741 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.741 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:20:33.741 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:33.741 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:33.741 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:33.741 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:33.741 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.741 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:20:33.741 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.741 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.741 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.741 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:33.741 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:34.675 00:20:34.933 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:34.933 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:34.933 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.191 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.191 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.191 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.191 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.191 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.191 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:35.191 { 00:20:35.191 "cntlid": 1, 00:20:35.191 "qid": 0, 00:20:35.191 "state": "enabled", 00:20:35.191 "thread": "nvmf_tgt_poll_group_000", 00:20:35.191 "listen_address": { 00:20:35.191 "trtype": "TCP", 00:20:35.191 "adrfam": "IPv4", 00:20:35.191 "traddr": "10.0.0.2", 00:20:35.191 "trsvcid": "4420" 00:20:35.191 }, 00:20:35.191 "peer_address": { 00:20:35.191 "trtype": "TCP", 00:20:35.191 "adrfam": "IPv4", 00:20:35.191 "traddr": "10.0.0.1", 00:20:35.191 "trsvcid": "36378" 00:20:35.191 }, 00:20:35.191 "auth": { 00:20:35.191 "state": "completed", 00:20:35.191 "digest": "sha512", 00:20:35.191 "dhgroup": "ffdhe8192" 00:20:35.191 } 00:20:35.191 } 00:20:35.191 ]' 00:20:35.191 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:35.191 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:35.191 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:35.191 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:35.191 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:35.448 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.448 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.449 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.014 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZjJlN2RiYzI5ZTVlN2MwZDdkMmRiN2VlMDJhODgzNDJmYzk3NDE5NDNmNTlhOTczN2MxNDg1NjM5MTcwMzQzODgS8Js=: 00:20:36.948 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.948 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:36.948 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.948 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.948 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.948 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:20:36.948 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.948 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.948 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.948 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:36.948 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:37.514 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:37.514 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:37.514 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:37.514 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:37.514 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:37.514 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:37.514 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:37.514 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:37.514 11:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:37.772 request: 00:20:37.772 { 00:20:37.772 "name": "nvme0", 00:20:37.772 "trtype": "tcp", 00:20:37.772 "traddr": "10.0.0.2", 00:20:37.772 "adrfam": "ipv4", 00:20:37.772 "trsvcid": "4420", 00:20:37.772 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:37.772 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:37.772 "prchk_reftag": false, 00:20:37.772 "prchk_guard": false, 00:20:37.772 "hdgst": false, 00:20:37.772 "ddgst": false, 00:20:37.772 "dhchap_key": "key3", 00:20:37.772 "method": "bdev_nvme_attach_controller", 00:20:37.772 "req_id": 1 00:20:37.772 } 00:20:37.772 Got JSON-RPC error response 00:20:37.772 response: 00:20:37.772 { 00:20:37.772 "code": -5, 00:20:37.772 "message": "Input/output error" 00:20:37.772 } 00:20:37.772 11:28:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:37.772 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:37.772 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:37.772 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:37.772 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:20:37.772 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:20:37.772 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:37.772 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:38.030 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:38.030 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:38.030 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:38.030 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:38.030 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:38.030 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:38.030 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:38.030 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:38.030 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:38.288 request: 00:20:38.288 { 00:20:38.288 "name": "nvme0", 00:20:38.288 "trtype": "tcp", 00:20:38.288 "traddr": "10.0.0.2", 00:20:38.288 "adrfam": "ipv4", 00:20:38.288 "trsvcid": "4420", 00:20:38.288 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:38.288 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:38.288 "prchk_reftag": false, 00:20:38.288 "prchk_guard": false, 00:20:38.288 "hdgst": false, 00:20:38.288 "ddgst": false, 00:20:38.288 "dhchap_key": "key3", 00:20:38.288 
"method": "bdev_nvme_attach_controller", 00:20:38.288 "req_id": 1 00:20:38.288 } 00:20:38.288 Got JSON-RPC error response 00:20:38.288 response: 00:20:38.288 { 00:20:38.288 "code": -5, 00:20:38.288 "message": "Input/output error" 00:20:38.288 } 00:20:38.288 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:38.288 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:38.288 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:38.288 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:38.288 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:38.288 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:20:38.288 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:38.288 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:38.288 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:38.288 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:38.546 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:38.546 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.546 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.546 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.546 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:38.546 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.546 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.546 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.546 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:38.546 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:38.546 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:38.546 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:38.546 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:38.546 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:38.546 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:38.546 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:38.546 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:38.803 request: 00:20:38.803 { 00:20:38.803 "name": "nvme0", 00:20:38.803 "trtype": "tcp", 00:20:38.803 "traddr": "10.0.0.2", 00:20:38.803 "adrfam": "ipv4", 00:20:38.803 "trsvcid": "4420", 00:20:38.803 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:38.803 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:38.803 "prchk_reftag": false, 00:20:38.803 "prchk_guard": false, 00:20:38.803 "hdgst": false, 00:20:38.803 "ddgst": false, 00:20:38.803 "dhchap_key": "key0", 00:20:38.803 "dhchap_ctrlr_key": "key1", 00:20:38.803 "method": "bdev_nvme_attach_controller", 00:20:38.803 "req_id": 1 00:20:38.803 } 00:20:38.803 Got JSON-RPC error response 00:20:38.803 response: 00:20:38.803 { 00:20:38.803 "code": -5, 00:20:38.803 "message": "Input/output error" 00:20:38.803 } 00:20:38.803 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:38.803 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:38.803 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:38.803 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:38.803 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:38.803 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:39.366 00:20:39.366 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:20:39.366 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.366 11:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:20:39.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.624 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.881 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:20:39.881 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:20:39.881 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2108972 00:20:39.881 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2108972 ']' 00:20:39.881 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2108972 00:20:39.881 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:20:39.881 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:39.881 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2108972 00:20:39.881 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:39.881 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:39.881 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2108972' 00:20:39.881 killing process with pid 2108972 00:20:39.881 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2108972 00:20:39.881 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2108972 00:20:40.445 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:40.445 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:40.445 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:20:40.445 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:40.445 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:20:40.445 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:40.445 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:40.445 rmmod nvme_tcp 00:20:40.445 rmmod nvme_fabrics 00:20:40.445 rmmod nvme_keyring 00:20:40.445 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:40.445 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:20:40.445 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:20:40.445 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@489 -- # '[' -n 2137365 ']' 00:20:40.445 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2137365 00:20:40.445 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2137365 ']' 00:20:40.445 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2137365 00:20:40.445 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:20:40.445 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:40.445 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2137365 00:20:40.445 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:40.445 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:40.445 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2137365' 00:20:40.445 killing process with pid 2137365 00:20:40.445 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2137365 00:20:40.445 11:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2137365 00:20:40.722 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:40.722 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:40.722 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:40.722 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:40.722 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:40.722 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.722 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:40.722 11:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.261 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:43.261 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.ONJ /tmp/spdk.key-sha256.9S6 /tmp/spdk.key-sha384.ojt /tmp/spdk.key-sha512.HxW /tmp/spdk.key-sha512.Le3 /tmp/spdk.key-sha384.zFs /tmp/spdk.key-sha256.t1p '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:20:43.261 00:20:43.261 real 4m6.381s 00:20:43.261 user 9m49.477s 00:20:43.261 sys 0m32.500s 00:20:43.261 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:43.261 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.261 ************************************ 00:20:43.261 END TEST nvmf_auth_target 00:20:43.261 ************************************ 00:20:43.261 11:28:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:20:43.261 11:28:38 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:43.261 11:28:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:20:43.261 11:28:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:43.261 11:28:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:43.261 ************************************ 00:20:43.261 START TEST nvmf_bdevio_no_huge 00:20:43.261 ************************************ 00:20:43.261 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:43.261 * Looking for test storage... 00:20:43.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:43.261 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:43.261 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:43.261 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:43.261 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:43.261 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:43.261 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:43.261 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:43.261 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:43.261 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:43.261 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:43.261 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:43.261 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:43.261 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:43.261 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:43.261 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:43.261 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:43.261 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:43.261 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:43.261 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:43.261 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:43.261 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:43.261 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:43.262 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.262 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.262 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.262 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:43.262 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.262 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:20:43.262 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:43.262 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:43.262 11:28:38 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:43.262 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:43.262 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:43.262 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:43.262 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:43.262 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:43.262 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:43.262 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:43.262 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:43.262 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:43.262 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:43.262 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:43.262 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:43.262 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:43.262 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.262 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:43.262 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.262 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:43.262 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:43.262 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:20:43.262 11:28:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:45.796 11:28:40 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:45.796 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:45.796 11:28:40 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:45.796 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:45.796 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:45.797 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:45.797 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:45.797 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:45.797 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:45.797 Found net devices under 0000:84:00.0: cvl_0_0 00:20:45.797 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:45.797 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:45.797 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:45.797 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:45.797 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:45.797 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:45.797 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:45.797 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
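The discovery loop above matched both E810 ports (0000:84:00.0 and 0000:84:00.1) and picked up their net devices; the remaining discovery output and the namespace wiring follow. Condensed into a standalone sketch, using only the interface names and addresses that appear in this log (not a generic recipe):

# Sketch of the netns plumbing nvmf_tcp_init performs below; cvl_0_0/cvl_0_1
# and the 10.0.0.x addresses are taken from this log.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                  # target-side port lives in the netns
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                               # reachability checked in both directions
ip netns exec "$NS" ping -c 1 10.0.0.1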
00:20:45.797 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:45.797 Found net devices under 0000:84:00.1: cvl_0_1 00:20:45.797 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:45.797 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:45.797 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:20:45.797 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:45.797 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:45.797 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:45.797 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:45.797 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:45.797 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:45.797 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:45.797 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:45.797 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:45.797 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:45.797 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:45.797 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:45.797 11:28:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:45.797 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:45.797 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:45.797 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:45.797 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:45.797 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:45.797 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:45.797 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:45.797 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:45.797 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:45.797 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:45.797 PING 10.0.0.2 
(10.0.0.2) 56(84) bytes of data.
00:20:45.797 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms
00:20:45.797
00:20:45.797 --- 10.0.0.2 ping statistics ---
00:20:45.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:45.797 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms
00:20:45.797 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:45.797 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:45.797 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms
00:20:45.797
00:20:45.797 --- 10.0.0.1 ping statistics ---
00:20:45.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:45.797 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms
00:20:45.797 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:45.797 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0
00:20:45.797 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:20:45.797 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:45.797 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:20:45.797 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:20:45.797 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:45.797 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:20:45.797 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:20:45.797 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:20:45.797 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:20:45.797 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable
00:20:45.797 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:20:45.797 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2140286
00:20:45.797 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
00:20:45.797 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 2140286
00:20:45.797 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 2140286 ']'
00:20:45.797 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:45.797 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100
00:20:45.797 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
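nvmfappstart is the crux of this suite: the target comes up inside the namespace with hugepages disabled (--no-huge) and a plain 1024 MiB heap. Reduced to its effect, a sketch; the command line is copied from the nvmfpid=2140286 launch above, while the polling loop only approximates the waitforlisten helper rather than reproducing it:

# Launch the target without hugepages; -s 1024 sizes the memory pool in MiB.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!
# waitforlisten (approximation): poll the RPC socket until the app answers.
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.5
done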
00:20:45.797 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:45.797 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:45.797 [2024-07-26 11:28:41.216039] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:20:45.797 [2024-07-26 11:28:41.216128] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:45.797 [2024-07-26 11:28:41.315636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:46.056 [2024-07-26 11:28:41.526379] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:46.056 [2024-07-26 11:28:41.526526] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:46.056 [2024-07-26 11:28:41.526545] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:46.056 [2024-07-26 11:28:41.526558] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:46.056 [2024-07-26 11:28:41.526570] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:46.056 [2024-07-26 11:28:41.526668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:46.056 [2024-07-26 11:28:41.526754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:20:46.056 [2024-07-26 11:28:41.526838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:20:46.056 [2024-07-26 11:28:41.526841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:46.623 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:46.623 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:20:46.623 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:46.623 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:46.623 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:46.623 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:46.623 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:46.623 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.623 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:46.623 [2024-07-26 11:28:42.232509] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:46.623 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.623 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:46.623 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.623 11:28:42 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:46.623 Malloc0 00:20:46.623 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.623 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:46.623 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.623 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:46.623 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.623 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:46.623 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.623 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:46.623 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.623 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:46.623 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.623 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:46.623 [2024-07-26 11:28:42.273272] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:46.623 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.623 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:46.623 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:46.623 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:20:46.623 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:20:46.623 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:46.623 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:46.623 { 00:20:46.623 "params": { 00:20:46.623 "name": "Nvme$subsystem", 00:20:46.623 "trtype": "$TEST_TRANSPORT", 00:20:46.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.623 "adrfam": "ipv4", 00:20:46.623 "trsvcid": "$NVMF_PORT", 00:20:46.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.623 "hdgst": ${hdgst:-false}, 00:20:46.623 "ddgst": ${ddgst:-false} 00:20:46.623 }, 00:20:46.623 "method": "bdev_nvme_attach_controller" 00:20:46.623 } 00:20:46.623 EOF 00:20:46.623 )") 00:20:46.623 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:20:46.623 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
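Before the rendered JSON below: the target side of this suite is just four provisioning RPCs (transport, bdev, subsystem, listener), all visible in the rpc_cmd traces above. Collected here as a sketch for anyone reproducing the setup by hand, with identifiers and flags copied from those calls:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192     # '-t tcp -o' comes from NVMF_TRANSPORT_OPTS
$rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB malloc bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

gen_nvmf_target_json then emits the host-side bdev_nvme_attach_controller config that bdevio consumes via --json /dev/fd/62, as printed next.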
00:20:46.882 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:20:46.882 11:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:46.882 "params": { 00:20:46.882 "name": "Nvme1", 00:20:46.882 "trtype": "tcp", 00:20:46.882 "traddr": "10.0.0.2", 00:20:46.882 "adrfam": "ipv4", 00:20:46.882 "trsvcid": "4420", 00:20:46.882 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.882 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:46.882 "hdgst": false, 00:20:46.882 "ddgst": false 00:20:46.882 }, 00:20:46.882 "method": "bdev_nvme_attach_controller" 00:20:46.882 }' 00:20:46.882 [2024-07-26 11:28:42.325350] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:20:46.882 [2024-07-26 11:28:42.325474] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2140440 ] 00:20:46.882 [2024-07-26 11:28:42.441395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:47.140 [2024-07-26 11:28:42.568457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:47.140 [2024-07-26 11:28:42.568487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:47.140 [2024-07-26 11:28:42.568491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.140 I/O targets: 00:20:47.140 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:47.140 00:20:47.140 00:20:47.140 CUnit - A unit testing framework for C - Version 2.1-3 00:20:47.140 http://cunit.sourceforge.net/ 00:20:47.140 00:20:47.140 00:20:47.140 Suite: bdevio tests on: Nvme1n1 00:20:47.140 Test: blockdev write read block ...passed 00:20:47.399 Test: blockdev write zeroes read block ...passed 00:20:47.399 Test: blockdev write zeroes read no split ...passed 00:20:47.399 Test: blockdev write zeroes read split ...passed 00:20:47.399 Test: blockdev write zeroes read split partial ...passed 00:20:47.399 Test: blockdev reset ...[2024-07-26 11:28:42.955216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:47.399 [2024-07-26 11:28:42.955333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2436670 (9): Bad file descriptor 00:20:47.399 [2024-07-26 11:28:42.967464] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:47.399 passed 00:20:47.399 Test: blockdev write read 8 blocks ...passed 00:20:47.399 Test: blockdev write read size > 128k ...passed 00:20:47.399 Test: blockdev write read invalid size ...passed 00:20:47.399 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:47.399 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:47.399 Test: blockdev write read max offset ...passed 00:20:47.657 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:47.657 Test: blockdev writev readv 8 blocks ...passed 00:20:47.657 Test: blockdev writev readv 30 x 1block ...passed 00:20:47.657 Test: blockdev writev readv block ...passed 00:20:47.657 Test: blockdev writev readv size > 128k ...passed 00:20:47.657 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:47.657 Test: blockdev comparev and writev ...[2024-07-26 11:28:43.183890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:47.657 [2024-07-26 11:28:43.183930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:47.657 [2024-07-26 11:28:43.183958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:47.657 [2024-07-26 11:28:43.183977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:47.657 [2024-07-26 11:28:43.184541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:47.657 [2024-07-26 11:28:43.184569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:47.657 [2024-07-26 11:28:43.184594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:47.657 [2024-07-26 11:28:43.184613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:47.657 [2024-07-26 11:28:43.185157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:47.657 [2024-07-26 11:28:43.185190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:47.657 [2024-07-26 11:28:43.185215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:47.657 [2024-07-26 11:28:43.185233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:47.657 [2024-07-26 11:28:43.185770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:47.657 [2024-07-26 11:28:43.185798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:47.657 [2024-07-26 11:28:43.185821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:47.657 [2024-07-26 11:28:43.185839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:20:47.657 passed
00:20:47.657 Test: blockdev nvme passthru rw ...passed
00:20:47.657 Test: blockdev nvme passthru vendor specific ...[2024-07-26 11:28:43.267912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:20:47.657 [2024-07-26 11:28:43.267942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:20:47.657 [2024-07-26 11:28:43.268259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:20:47.657 [2024-07-26 11:28:43.268286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:20:47.657 [2024-07-26 11:28:43.268595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:20:47.657 [2024-07-26 11:28:43.268621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:20:47.657 [2024-07-26 11:28:43.268828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:20:47.657 [2024-07-26 11:28:43.268853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:20:47.657 passed
00:20:47.657 Test: blockdev nvme admin passthru ...passed
00:20:47.913 Test: blockdev copy ...passed
00:20:47.913
00:20:47.913 Run Summary: Type Total Ran Passed Failed Inactive
00:20:47.913 suites 1 1 n/a 0 0
00:20:47.913 tests 23 23 23 0 0
00:20:47.913 asserts 152 152 152 0 n/a
00:20:47.913
00:20:47.913 Elapsed time = 1.174 seconds
00:20:48.171 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:48.171 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:48.171 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:20:48.171 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:48.171 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:20:48.171 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini
00:20:48.171 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup
00:20:48.171 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync
00:20:48.171 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:48.171 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e
00:20:48.171 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:48.171 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:20:48.171 rmmod nvme_tcp
00:20:48.171 rmmod nvme_fabrics
00:20:48.171 rmmod nvme_keyring
00:20:48.171 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:48.171 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge --
nvmf/common.sh@124 -- # set -e 00:20:48.171 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:20:48.171 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2140286 ']' 00:20:48.171 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2140286 00:20:48.171 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 2140286 ']' 00:20:48.171 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 2140286 00:20:48.171 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:20:48.171 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:48.171 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2140286 00:20:48.171 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:20:48.171 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:20:48.171 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2140286' 00:20:48.171 killing process with pid 2140286 00:20:48.171 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 2140286 00:20:48.171 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 2140286 00:20:49.108 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:49.108 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:49.108 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:49.108 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:49.108 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:49.108 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.108 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:49.108 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.012 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:51.012 00:20:51.012 real 0m8.131s 00:20:51.012 user 0m13.917s 00:20:51.012 sys 0m3.329s 00:20:51.012 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:51.012 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:51.012 ************************************ 00:20:51.012 END TEST nvmf_bdevio_no_huge 00:20:51.012 ************************************ 00:20:51.012 11:28:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:51.012 11:28:46 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:51.012 11:28:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:51.012 11:28:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:51.012 ************************************ 00:20:51.012 START TEST nvmf_tls 00:20:51.012 ************************************ 00:20:51.012 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:51.012 * Looking for test storage... 00:20:51.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:51.012 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:51.012 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:51.012 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:51.012 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:51.012 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:51.012 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:51.012 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:51.012 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:51.012 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:51.012 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:51.012 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:51.012 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:51.012 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:51.013 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:51.013 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:51.013 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:51.013 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:51.013 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:51.013 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:51.013 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:51.013 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:51.013 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:51.013 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.013 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.013 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.013 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:51.013 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.013 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:20:51.013 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:51.013 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:51.013 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:51.013 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:51.013 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:51.013 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:51.013 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
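Editor's note: the enormous, repetitive PATH strings above come from paths/export.sh running under xtrace. Each numbered line (@2–@4) prepends one toolchain directory, and the file is sourced once per nested test script, so the logged PATH accumulates duplicate entries. A minimal sketch of that pattern, reconstructed from the xtrace above (not the verbatim export.sh):

    # paths/export.sh pattern, per the xtrace: each source prepends the same
    # three toolchain directories again, which is why PATH grows duplicates.
    PATH=/opt/golangci/1.54.2/bin:$PATH
    PATH=/opt/go/1.21.1/bin:$PATH
    PATH=/opt/protoc/21.7/bin:$PATH
    export PATH
    echo $PATH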
00:20:51.013 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:51.013 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:51.013 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:20:51.013 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:51.013 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:51.013 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:51.013 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:51.013 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:51.013 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.013 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:51.013 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.013 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:51.013 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:51.013 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:20:51.013 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:53.544 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:53.544 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:20:53.544 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:53.544 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:53.544 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:53.545 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:53.545 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:53.545 Found net devices under 0000:84:00.0: cvl_0_0 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:53.545 Found net devices under 0000:84:00.1: cvl_0_1 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:53.545 11:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:53.545 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:53.805 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:53.805 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:53.805 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:53.805 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:53.805 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:53.805 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:53.805 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:53.805 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:53.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:53.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:20:53.805 00:20:53.805 --- 10.0.0.2 ping statistics --- 00:20:53.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.805 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:20:53.805 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:53.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:53.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:20:53.805 00:20:53.805 --- 10.0.0.1 ping statistics --- 00:20:53.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.805 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:20:53.805 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:53.805 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:20:53.805 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:53.805 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:53.805 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:53.805 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:53.805 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:53.805 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:53.805 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:53.805 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:53.805 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:53.805 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:53.805 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:53.805 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2142656 00:20:53.805 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:53.805 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2142656 00:20:53.805 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2142656 ']' 00:20:53.805 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.805 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:53.805 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.805 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:53.805 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:53.805 [2024-07-26 11:28:49.438581] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
00:20:53.805 [2024-07-26 11:28:49.438679] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:54.063 EAL: No free 2048 kB hugepages reported on node 1 00:20:54.063 [2024-07-26 11:28:49.559575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.322 [2024-07-26 11:28:49.740391] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:54.322 [2024-07-26 11:28:49.740503] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:54.322 [2024-07-26 11:28:49.740533] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:54.322 [2024-07-26 11:28:49.740555] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:54.322 [2024-07-26 11:28:49.740576] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:54.322 [2024-07-26 11:28:49.740616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:54.322 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:54.322 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:54.322 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:54.322 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:54.322 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.322 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:54.322 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:20:54.322 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:54.581 true 00:20:54.581 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:54.581 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:20:55.147 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:20:55.147 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:20:55.148 11:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:55.714 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:55.714 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:20:55.972 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:20:55.972 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:20:55.972 11:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 
7 00:20:56.538 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:56.538 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:20:56.796 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:20:56.796 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:20:56.796 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:56.797 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:20:57.054 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:20:57.055 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:20:57.055 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:57.650 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:57.650 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:20:57.907 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:20:57.907 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:20:57.907 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:58.165 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:58.165 11:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:20:58.423 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:20:58.423 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:20:58.423 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:58.423 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:58.423 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:58.423 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:58.423 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:58.423 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:58.423 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:58.423 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:58.423 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:58.423 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 
1 00:20:58.423 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:58.423 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:58.423 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:20:58.423 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:58.423 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:58.681 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:58.681 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:20:58.681 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.rXsmxjtCAA 00:20:58.681 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:58.681 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.Pz1HTDHR4X 00:20:58.681 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:58.681 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:58.681 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.rXsmxjtCAA 00:20:58.681 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Pz1HTDHR4X 00:20:58.681 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:58.939 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:59.197 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.rXsmxjtCAA 00:20:59.197 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.rXsmxjtCAA 00:20:59.197 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:59.764 [2024-07-26 11:28:55.180374] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:59.764 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:00.036 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:00.974 [2024-07-26 11:28:56.275397] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:00.974 [2024-07-26 11:28:56.275710] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:00.974 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:01.232 malloc0 00:21:01.233 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:01.492 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rXsmxjtCAA 00:21:01.750 [2024-07-26 11:28:57.376269] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:01.750 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.rXsmxjtCAA 00:21:02.008 EAL: No free 2048 kB hugepages reported on node 1 00:21:11.975 Initializing NVMe Controllers 00:21:11.975 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:11.975 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:11.975 Initialization complete. Launching workers. 00:21:11.975 ======================================================== 00:21:11.975 Latency(us) 00:21:11.975 Device Information : IOPS MiB/s Average min max 00:21:11.975 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7314.68 28.57 8752.52 1343.09 10443.86 00:21:11.975 ======================================================== 00:21:11.975 Total : 7314.68 28.57 8752.52 1343.09 10443.86 00:21:11.975 00:21:11.975 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rXsmxjtCAA 00:21:11.975 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:11.975 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:11.975 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:11.975 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.rXsmxjtCAA' 00:21:11.975 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:11.975 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2144809 00:21:11.975 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:11.975 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:11.975 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2144809 /var/tmp/bdevperf.sock 00:21:11.975 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2144809 ']' 00:21:11.975 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:11.975 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:11.975 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:11.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:11.975 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:11.975 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.975 [2024-07-26 11:29:07.558138] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:21:11.975 [2024-07-26 11:29:07.558218] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2144809 ] 00:21:11.975 EAL: No free 2048 kB hugepages reported on node 1 00:21:11.975 [2024-07-26 11:29:07.627062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.234 [2024-07-26 11:29:07.766194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:12.234 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:12.234 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:12.234 11:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rXsmxjtCAA 00:21:12.492 [2024-07-26 11:29:08.115185] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:12.492 [2024-07-26 11:29:08.115337] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:12.750 TLSTESTn1 00:21:12.750 11:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:12.750 Running I/O for 10 seconds... 
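Editor's note: the TLS keys used in this run (NVMeTLSkey-1:01:MDAx…: in /tmp/tmp.rXsmxjtCAA and NVMeTLSkey-1:01:ZmZl…: in /tmp/tmp.Pz1HTDHR4X) were generated a few steps back by the format_interchange_psk/format_key helpers, which shell out to an inline python snippet. A minimal sketch of that formatting, assuming the helper appends a little-endian CRC-32 of the configured secret before base64-encoding — consistent with the 48-character base64 payloads logged above (32 secret bytes + 4 checksum bytes), but not the verbatim nvmf/common.sh code:

    # Sketch only: NVMe/TCP TLS PSK interchange format as inferred from the log.
    format_interchange_psk() {
        local key=$1 digest=$2
        python3 - "$key" "$digest" <<'EOF'
    import base64, struct, sys, zlib
    key = sys.argv[1].encode()                 # secret as ASCII bytes
    crc = struct.pack("<I", zlib.crc32(key))   # assumed: little-endian CRC-32 suffix
    print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
    EOF
    }
    format_interchange_psk 00112233445566778899aabbccddeeff 1
    # per the log, this should print:
    # NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: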
00:21:24.953 00:21:24.953 Latency(us) 00:21:24.953 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:24.953 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:24.953 Verification LBA range: start 0x0 length 0x2000 00:21:24.953 TLSTESTn1 : 10.04 2604.67 10.17 0.00 0.00 49004.91 7427.41 68739.98 00:21:24.953 =================================================================================================================== 00:21:24.953 Total : 2604.67 10.17 0.00 0.00 49004.91 7427.41 68739.98 00:21:24.953 0 00:21:24.953 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:24.953 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 2144809 00:21:24.953 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2144809 ']' 00:21:24.953 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2144809 00:21:24.953 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:24.953 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:24.953 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2144809 00:21:24.953 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:24.953 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:24.953 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2144809' 00:21:24.953 killing process with pid 2144809 00:21:24.953 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2144809 00:21:24.953 Received shutdown signal, test time was about 10.000000 seconds 00:21:24.953 00:21:24.953 Latency(us) 00:21:24.953 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:24.953 =================================================================================================================== 00:21:24.953 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:24.953 [2024-07-26 11:29:18.449922] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:24.953 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2144809 00:21:24.953 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Pz1HTDHR4X 00:21:24.953 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:24.953 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Pz1HTDHR4X 00:21:24.953 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:24.953 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:24.953 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:24.953 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
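Editor's note: tls.sh@146 above wraps run_bdevperf in NOT — the controller attach is attempted with the second key (/tmp/tmp.Pz1HTDHR4X) against a subsystem registered with the first, so this step passes only when the TLS connection is rejected (the bdev_nvme_attach_controller JSON-RPC error that follows). A sketch of that inverted-status pattern, assuming roughly what autotest_common.sh's NOT/valid_exec_arg machinery does (the real helper also type-checks its argument, per the `type -t` xtrace):

    # Expected-failure wrapper pattern, sketched from the xtrace; not the
    # verbatim autotest_common.sh implementation.
    NOT() {
        if "$@"; then
            return 1    # command was expected to fail but succeeded
        fi
        return 0        # failure observed, so the negative test step passes
    }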
00:21:24.954 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Pz1HTDHR4X 00:21:24.954 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:24.954 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:24.954 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:24.954 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Pz1HTDHR4X' 00:21:24.954 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:24.954 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2146075 00:21:24.954 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:24.954 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:24.954 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2146075 /var/tmp/bdevperf.sock 00:21:24.954 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2146075 ']' 00:21:24.954 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:24.954 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:24.954 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:24.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:24.954 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:24.954 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:24.954 [2024-07-26 11:29:18.834865] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
00:21:24.954 [2024-07-26 11:29:18.834972] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2146075 ] 00:21:24.954 EAL: No free 2048 kB hugepages reported on node 1 00:21:24.954 [2024-07-26 11:29:18.919578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.954 [2024-07-26 11:29:19.075364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:24.954 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:24.954 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:24.954 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Pz1HTDHR4X 00:21:24.954 [2024-07-26 11:29:19.813933] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:24.954 [2024-07-26 11:29:19.814089] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:24.954 [2024-07-26 11:29:19.826267] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:24.954 [2024-07-26 11:29:19.827062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dbe6d0 (107): Transport endpoint is not connected 00:21:24.954 [2024-07-26 11:29:19.828051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dbe6d0 (9): Bad file descriptor 00:21:24.954 [2024-07-26 11:29:19.829053] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:24.954 [2024-07-26 11:29:19.829081] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:24.954 [2024-07-26 11:29:19.829105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:24.954 request: 00:21:24.954 { 00:21:24.954 "name": "TLSTEST", 00:21:24.954 "trtype": "tcp", 00:21:24.954 "traddr": "10.0.0.2", 00:21:24.954 "adrfam": "ipv4", 00:21:24.954 "trsvcid": "4420", 00:21:24.954 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.954 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:24.954 "prchk_reftag": false, 00:21:24.954 "prchk_guard": false, 00:21:24.954 "hdgst": false, 00:21:24.954 "ddgst": false, 00:21:24.954 "psk": "/tmp/tmp.Pz1HTDHR4X", 00:21:24.954 "method": "bdev_nvme_attach_controller", 00:21:24.954 "req_id": 1 00:21:24.954 } 00:21:24.954 Got JSON-RPC error response 00:21:24.954 response: 00:21:24.954 { 00:21:24.954 "code": -5, 00:21:24.954 "message": "Input/output error" 00:21:24.954 } 00:21:24.954 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2146075 00:21:24.954 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2146075 ']' 00:21:24.954 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2146075 00:21:24.954 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:24.954 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:24.954 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2146075 00:21:24.954 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:24.954 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:24.954 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2146075' 00:21:24.954 killing process with pid 2146075 00:21:24.954 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2146075 00:21:24.954 Received shutdown signal, test time was about 10.000000 seconds 00:21:24.954 00:21:24.954 Latency(us) 00:21:24.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:24.954 =================================================================================================================== 00:21:24.954 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:24.954 [2024-07-26 11:29:19.896229] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:24.954 11:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2146075 00:21:24.954 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:24.954 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:24.954 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:24.954 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:24.954 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:24.954 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.rXsmxjtCAA 00:21:24.954 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:24.954 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.rXsmxjtCAA 00:21:24.954 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:24.954 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:24.954 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:24.954 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:24.954 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.rXsmxjtCAA 00:21:24.954 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:24.954 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:24.954 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:24.954 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.rXsmxjtCAA' 00:21:24.954 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:24.954 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2146268 00:21:24.954 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:24.954 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:24.954 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2146268 /var/tmp/bdevperf.sock 00:21:24.954 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2146268 ']' 00:21:24.954 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:24.954 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:24.954 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:24.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:24.954 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:24.954 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:24.954 [2024-07-26 11:29:20.249444] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
00:21:24.954 [2024-07-26 11:29:20.249550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2146268 ] 00:21:24.954 EAL: No free 2048 kB hugepages reported on node 1 00:21:24.954 [2024-07-26 11:29:20.324593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.954 [2024-07-26 11:29:20.462642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:24.954 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:24.954 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:24.954 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.rXsmxjtCAA 00:21:25.212 [2024-07-26 11:29:20.868155] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:25.212 [2024-07-26 11:29:20.868310] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:25.471 [2024-07-26 11:29:20.878855] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:25.471 [2024-07-26 11:29:20.878900] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:25.471 [2024-07-26 11:29:20.878955] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:25.471 [2024-07-26 11:29:20.879933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175f6d0 (107): Transport endpoint is not connected 00:21:25.472 [2024-07-26 11:29:20.880919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175f6d0 (9): Bad file descriptor 00:21:25.472 [2024-07-26 11:29:20.881916] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:25.472 [2024-07-26 11:29:20.881944] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:25.472 [2024-07-26 11:29:20.881968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
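The three *ERROR* lines just above tell the whole story of this negative case: the initiator connects as host2, the target builds the TLS PSK identity "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1", finds no key registered for that (hostnqn, subnqn) pair, and drops the socket, so the attach surfaces as JSON-RPC -5 (Input/output error) in the request/response dump that follows. A minimal sketch of how that identity string is assembled, read directly off the error text (illustrative only, not SPDK's internal API):

    # The PSK identity the target searches for, per the tcp.c/posix.c errors above.
    # "NVMe0R01" is the fixed prefix visible in the log; hostnqn and subnqn are the
    # -q and -n arguments passed to bdev_nvme_attach_controller.
    hostnqn="nqn.2016-06.io.spdk:host2"
    subnqn="nqn.2016-06.io.spdk:cnode1"
    echo "NVMe0R01 ${hostnqn} ${subnqn}"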
00:21:25.472 request: 00:21:25.472 { 00:21:25.472 "name": "TLSTEST", 00:21:25.472 "trtype": "tcp", 00:21:25.472 "traddr": "10.0.0.2", 00:21:25.472 "adrfam": "ipv4", 00:21:25.472 "trsvcid": "4420", 00:21:25.472 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:25.472 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:25.472 "prchk_reftag": false, 00:21:25.472 "prchk_guard": false, 00:21:25.472 "hdgst": false, 00:21:25.472 "ddgst": false, 00:21:25.472 "psk": "/tmp/tmp.rXsmxjtCAA", 00:21:25.472 "method": "bdev_nvme_attach_controller", 00:21:25.472 "req_id": 1 00:21:25.472 } 00:21:25.472 Got JSON-RPC error response 00:21:25.472 response: 00:21:25.472 { 00:21:25.472 "code": -5, 00:21:25.472 "message": "Input/output error" 00:21:25.472 } 00:21:25.472 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2146268 00:21:25.472 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2146268 ']' 00:21:25.472 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2146268 00:21:25.472 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:25.472 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:25.472 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2146268 00:21:25.472 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:25.472 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:25.472 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2146268' 00:21:25.472 killing process with pid 2146268 00:21:25.472 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2146268 00:21:25.472 Received shutdown signal, test time was about 10.000000 seconds 00:21:25.472 00:21:25.472 Latency(us) 00:21:25.472 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.472 =================================================================================================================== 00:21:25.472 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:25.472 [2024-07-26 11:29:20.931976] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:25.472 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2146268 00:21:25.760 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:25.760 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:25.760 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:25.760 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:25.760 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:25.760 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.rXsmxjtCAA 00:21:25.760 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:25.760 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.rXsmxjtCAA 00:21:25.760 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:25.760 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:25.760 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:25.760 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:25.760 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.rXsmxjtCAA 00:21:25.760 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:25.760 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:25.760 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:25.760 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.rXsmxjtCAA' 00:21:25.760 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:25.760 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2146402 00:21:25.760 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:25.760 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:25.760 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2146402 /var/tmp/bdevperf.sock 00:21:25.760 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2146402 ']' 00:21:25.760 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:25.760 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:25.760 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:25.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:25.760 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:25.760 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:25.760 [2024-07-26 11:29:21.288786] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
00:21:25.760 [2024-07-26 11:29:21.288888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2146402 ] 00:21:25.760 EAL: No free 2048 kB hugepages reported on node 1 00:21:25.760 [2024-07-26 11:29:21.370259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.024 [2024-07-26 11:29:21.509347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:26.024 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:26.024 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:26.024 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rXsmxjtCAA 00:21:26.590 [2024-07-26 11:29:22.132896] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:26.590 [2024-07-26 11:29:22.133044] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:26.590 [2024-07-26 11:29:22.142178] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:26.590 [2024-07-26 11:29:22.142223] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:26.590 [2024-07-26 11:29:22.142279] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:26.590 [2024-07-26 11:29:22.143053] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a66d0 (107): Transport endpoint is not connected 00:21:26.590 [2024-07-26 11:29:22.144044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a66d0 (9): Bad file descriptor 00:21:26.590 [2024-07-26 11:29:22.145046] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:26.590 [2024-07-26 11:29:22.145074] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:26.590 [2024-07-26 11:29:22.145098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
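This second case flips the mismatch to the subsystem side (host1 is registered, cnode2 is not) and fails identically. Both runs go through the same run_bdevperf helper from target/tls.sh, whose shape can be read off the xtrace: start bdevperf idle (-z) on a private RPC socket, wait for the socket, attempt a TLS attach with the supplied key, and tear down on failure so the outer NOT wrapper can assert the non-zero exit. A simplified reconstruction under those assumptions (waitforlisten and cleanup details are elided):

    run_bdevperf() {   # sketch; argument order matches the trace: subnqn hostnqn psk
        local subnqn=$1 hostnqn=$2 psk="--psk $3"
        local sock=/var/tmp/bdevperf.sock
        build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &
        local bdevperf_pid=$!
        # waitforlisten polls $sock before the first RPC is issued
        scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp \
            -a 10.0.0.2 -s 4420 -f ipv4 -n "$subnqn" -q "$hostnqn" $psk ||
            { kill "$bdevperf_pid"; return 1; }   # the tls.sh@36/@37 path seen above
    }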
00:21:26.590 request: 00:21:26.590 { 00:21:26.590 "name": "TLSTEST", 00:21:26.590 "trtype": "tcp", 00:21:26.590 "traddr": "10.0.0.2", 00:21:26.590 "adrfam": "ipv4", 00:21:26.590 "trsvcid": "4420", 00:21:26.590 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:26.590 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:26.590 "prchk_reftag": false, 00:21:26.590 "prchk_guard": false, 00:21:26.590 "hdgst": false, 00:21:26.590 "ddgst": false, 00:21:26.590 "psk": "/tmp/tmp.rXsmxjtCAA", 00:21:26.590 "method": "bdev_nvme_attach_controller", 00:21:26.590 "req_id": 1 00:21:26.590 } 00:21:26.590 Got JSON-RPC error response 00:21:26.590 response: 00:21:26.590 { 00:21:26.590 "code": -5, 00:21:26.590 "message": "Input/output error" 00:21:26.590 } 00:21:26.590 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2146402 00:21:26.590 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2146402 ']' 00:21:26.590 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2146402 00:21:26.590 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:26.590 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:26.590 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2146402 00:21:26.590 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:26.590 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:26.590 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2146402' 00:21:26.590 killing process with pid 2146402 00:21:26.590 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2146402 00:21:26.590 Received shutdown signal, test time was about 10.000000 seconds 00:21:26.590 00:21:26.590 Latency(us) 00:21:26.590 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:26.590 =================================================================================================================== 00:21:26.590 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:26.590 [2024-07-26 11:29:22.201668] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:26.590 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2146402 00:21:26.848 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:26.848 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:26.848 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:26.848 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:26.848 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:26.848 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:26.848 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:26.848 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:26.848 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:27.107 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:27.107 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:27.107 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:27.107 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:27.107 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:27.107 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:27.107 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:27.107 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:27.107 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:27.108 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2146547 00:21:27.108 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:27.108 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:27.108 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2146547 /var/tmp/bdevperf.sock 00:21:27.108 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2146547 ']' 00:21:27.108 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:27.108 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:27.108 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:27.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:27.108 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:27.108 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:27.108 [2024-07-26 11:29:22.566968] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
00:21:27.108 [2024-07-26 11:29:22.567080] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2146547 ] 00:21:27.108 EAL: No free 2048 kB hugepages reported on node 1 00:21:27.108 [2024-07-26 11:29:22.647373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.367 [2024-07-26 11:29:22.787163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:27.367 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:27.367 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:27.367 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:27.625 [2024-07-26 11:29:23.245131] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:27.625 [2024-07-26 11:29:23.247441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a7ae10 (9): Bad file descriptor 00:21:27.625 [2024-07-26 11:29:23.248438] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:27.625 [2024-07-26 11:29:23.248480] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:27.625 [2024-07-26 11:29:23.248500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
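The third case omits the key entirely (psk= is empty in the trace), so no TLS credentials are generated at all: the listener was created with TLS enabled, the plaintext connection dies with errno 107 before the controller initializes, and the request dump that follows carries no "psk" field. The two sides of that contract, condensed from commands visible elsewhere in this log (paths abbreviated):

    # Target side: listener created with TLS required (-k), as seen later at tls.sh@53.
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k
    # Initiator side in this case: no --psk, so the attach cannot complete.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1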
00:21:27.625 request: 00:21:27.625 { 00:21:27.625 "name": "TLSTEST", 00:21:27.625 "trtype": "tcp", 00:21:27.625 "traddr": "10.0.0.2", 00:21:27.625 "adrfam": "ipv4", 00:21:27.625 "trsvcid": "4420", 00:21:27.625 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:27.625 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:27.625 "prchk_reftag": false, 00:21:27.625 "prchk_guard": false, 00:21:27.625 "hdgst": false, 00:21:27.626 "ddgst": false, 00:21:27.626 "method": "bdev_nvme_attach_controller", 00:21:27.626 "req_id": 1 00:21:27.626 } 00:21:27.626 Got JSON-RPC error response 00:21:27.626 response: 00:21:27.626 { 00:21:27.626 "code": -5, 00:21:27.626 "message": "Input/output error" 00:21:27.626 } 00:21:27.626 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2146547 00:21:27.626 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2146547 ']' 00:21:27.626 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2146547 00:21:27.626 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:27.626 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:27.626 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2146547 00:21:27.884 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:27.884 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:27.884 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2146547' 00:21:27.884 killing process with pid 2146547 00:21:27.884 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2146547 00:21:27.884 Received shutdown signal, test time was about 10.000000 seconds 00:21:27.884 00:21:27.884 Latency(us) 00:21:27.884 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:27.884 =================================================================================================================== 00:21:27.884 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:27.884 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2146547 00:21:28.142 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:28.142 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:28.143 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:28.143 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:28.143 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:28.143 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 2142656 00:21:28.143 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2142656 ']' 00:21:28.143 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2142656 00:21:28.143 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:28.143 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:28.143 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2142656 00:21:28.143 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:28.143 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:28.143 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2142656' 00:21:28.143 killing process with pid 2142656 00:21:28.143 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2142656 00:21:28.143 [2024-07-26 11:29:23.653756] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:28.143 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2142656 00:21:28.401 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:28.401 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:28.401 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:28.401 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:28.401 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:28.401 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:21:28.401 11:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:28.660 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:28.660 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:21:28.660 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.lD0dPRpfsy 00:21:28.660 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:28.660 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.lD0dPRpfsy 00:21:28.660 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:21:28.660 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:28.660 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:28.660 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:28.660 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2146706 00:21:28.660 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:28.660 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2146706 00:21:28.660 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2146706 ']' 00:21:28.660 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.660 11:29:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:28.660 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:28.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:28.660 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:28.660 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:28.660 [2024-07-26 11:29:24.150069] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:21:28.660 [2024-07-26 11:29:24.150159] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:28.660 EAL: No free 2048 kB hugepages reported on node 1 00:21:28.660 [2024-07-26 11:29:24.232731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.919 [2024-07-26 11:29:24.386836] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:28.919 [2024-07-26 11:29:24.386903] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:28.919 [2024-07-26 11:29:24.386924] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:28.919 [2024-07-26 11:29:24.386940] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:28.919 [2024-07-26 11:29:24.386954] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
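The key used for the remainder of the section was produced above by format_interchange_psk: the raw key material is wrapped as "NVMeTLSkey-1:<digest>:<base64 payload>:", with digest 2 selecting SHA-384, written to a mktemp path, and locked down to mode 0600. A hedged sketch of that framing, mirroring the script's own python - heredoc; the little-endian CRC32 trailer is an assumption based on the NVMe TP 8006 interchange format, so verify against nvmf/common.sh before relying on it:

    key=00112233445566778899aabbccddeeff0011223344556677
    python3 - "$key" <<'EOF'
    import base64, sys, zlib
    key = sys.argv[1].encode()                    # key material exactly as passed in
    crc = zlib.crc32(key).to_bytes(4, "little")   # assumed trailer (TP 8006 format)
    print("NVMeTLSkey-1:02:" + base64.b64encode(key + crc).decode() + ":")
    EOF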
00:21:28.919 [2024-07-26 11:29:24.386998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:28.919 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:28.919 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:28.919 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:28.919 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:28.919 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:28.919 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:28.919 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.lD0dPRpfsy 00:21:28.919 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.lD0dPRpfsy 00:21:28.919 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:29.486 [2024-07-26 11:29:24.872794] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:29.486 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:29.744 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:30.002 [2024-07-26 11:29:25.586735] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:30.002 [2024-07-26 11:29:25.587027] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:30.002 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:30.567 malloc0 00:21:30.567 11:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:30.823 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lD0dPRpfsy 00:21:31.081 [2024-07-26 11:29:26.680036] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:31.081 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lD0dPRpfsy 00:21:31.081 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:31.081 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:31.081 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:31.081 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.lD0dPRpfsy' 00:21:31.081 11:29:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:31.081 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2146990 00:21:31.081 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:31.081 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:31.081 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2146990 /var/tmp/bdevperf.sock 00:21:31.081 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2146990 ']' 00:21:31.081 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:31.081 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:31.081 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:31.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:31.081 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:31.081 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:31.339 [2024-07-26 11:29:26.761604] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:21:31.339 [2024-07-26 11:29:26.761714] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2146990 ] 00:21:31.339 EAL: No free 2048 kB hugepages reported on node 1 00:21:31.339 [2024-07-26 11:29:26.842438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.339 [2024-07-26 11:29:26.984414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:31.597 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:31.597 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:31.597 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lD0dPRpfsy 00:21:31.854 [2024-07-26 11:29:27.443157] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:31.854 [2024-07-26 11:29:27.443305] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:32.112 TLSTESTn1 00:21:32.112 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:32.112 Running I/O for 10 seconds... 
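This is the first attach in the section that succeeds: the target was populated by setup_nvmf_tgt (TCP transport, cnode1 subsystem, TLS listener, malloc0 namespace, host1 registered with the 0600-mode key), the initiator presented the same key, and the resulting TLSTESTn1 bdev is driven for ten seconds of verified I/O, producing the latency table that follows. The driving pair of commands, condensed from the trace with the workspace prefix dropped:

    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lD0dPRpfsy
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests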
00:21:44.309 00:21:44.309 Latency(us) 00:21:44.309 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.309 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:44.309 Verification LBA range: start 0x0 length 0x2000 00:21:44.309 TLSTESTn1 : 10.03 2636.82 10.30 0.00 0.00 48427.76 8009.96 86604.61 00:21:44.309 =================================================================================================================== 00:21:44.309 Total : 2636.82 10.30 0.00 0.00 48427.76 8009.96 86604.61 00:21:44.309 0 00:21:44.309 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:44.309 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 2146990 00:21:44.309 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2146990 ']' 00:21:44.309 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2146990 00:21:44.309 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:44.309 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:44.309 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2146990 00:21:44.309 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:44.309 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:44.309 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2146990' 00:21:44.309 killing process with pid 2146990 00:21:44.309 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2146990 00:21:44.309 Received shutdown signal, test time was about 10.000000 seconds 00:21:44.309 00:21:44.309 Latency(us) 00:21:44.309 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.309 =================================================================================================================== 00:21:44.309 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:44.309 [2024-07-26 11:29:37.879651] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:44.309 11:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2146990 00:21:44.309 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.lD0dPRpfsy 00:21:44.309 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lD0dPRpfsy 00:21:44.309 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:44.309 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lD0dPRpfsy 00:21:44.309 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:44.309 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:44.309 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:44.309 
11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:44.309 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lD0dPRpfsy 00:21:44.309 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:44.309 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:44.309 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:44.309 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.lD0dPRpfsy' 00:21:44.309 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:44.309 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2148377 00:21:44.309 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:44.309 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:44.309 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2148377 /var/tmp/bdevperf.sock 00:21:44.309 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2148377 ']' 00:21:44.309 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:44.309 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:44.309 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:44.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:44.309 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:44.309 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.309 [2024-07-26 11:29:38.260807] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
00:21:44.309 [2024-07-26 11:29:38.260910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2148377 ] 00:21:44.309 EAL: No free 2048 kB hugepages reported on node 1 00:21:44.309 [2024-07-26 11:29:38.341552] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.309 [2024-07-26 11:29:38.480063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:44.309 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:44.309 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:44.309 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lD0dPRpfsy 00:21:44.309 [2024-07-26 11:29:39.118856] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:44.309 [2024-07-26 11:29:39.118970] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:44.309 [2024-07-26 11:29:39.118991] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.lD0dPRpfsy 00:21:44.309 request: 00:21:44.309 { 00:21:44.309 "name": "TLSTEST", 00:21:44.309 "trtype": "tcp", 00:21:44.309 "traddr": "10.0.0.2", 00:21:44.309 "adrfam": "ipv4", 00:21:44.309 "trsvcid": "4420", 00:21:44.309 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.309 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:44.309 "prchk_reftag": false, 00:21:44.309 "prchk_guard": false, 00:21:44.309 "hdgst": false, 00:21:44.309 "ddgst": false, 00:21:44.309 "psk": "/tmp/tmp.lD0dPRpfsy", 00:21:44.309 "method": "bdev_nvme_attach_controller", 00:21:44.309 "req_id": 1 00:21:44.309 } 00:21:44.309 Got JSON-RPC error response 00:21:44.309 response: 00:21:44.309 { 00:21:44.309 "code": -1, 00:21:44.309 "message": "Operation not permitted" 00:21:44.309 } 00:21:44.309 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2148377 00:21:44.309 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2148377 ']' 00:21:44.309 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2148377 00:21:44.309 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:44.309 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:44.309 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2148377 00:21:44.309 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:44.309 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:44.309 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2148377' 00:21:44.309 killing process with pid 2148377 00:21:44.309 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2148377 00:21:44.309 Received shutdown signal, test time was about 10.000000 seconds 00:21:44.309 
00:21:44.309 Latency(us) 00:21:44.309 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.309 =================================================================================================================== 00:21:44.309 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:44.309 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2148377 00:21:44.310 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:44.310 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:44.310 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:44.310 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:44.310 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:44.310 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 2146706 00:21:44.310 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2146706 ']' 00:21:44.310 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2146706 00:21:44.310 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:44.310 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:44.310 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2146706 00:21:44.310 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:44.310 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:44.310 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2146706' 00:21:44.310 killing process with pid 2146706 00:21:44.310 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2146706 00:21:44.310 [2024-07-26 11:29:39.504103] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:44.310 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2146706 00:21:44.310 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:21:44.310 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:44.310 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:44.310 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.310 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2148573 00:21:44.310 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:44.310 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2148573 00:21:44.310 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2148573 ']' 00:21:44.310 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.310 11:29:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:44.310 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:44.310 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:44.310 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.310 [2024-07-26 11:29:39.883132] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:21:44.310 [2024-07-26 11:29:39.883246] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:44.310 EAL: No free 2048 kB hugepages reported on node 1 00:21:44.569 [2024-07-26 11:29:39.973807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.569 [2024-07-26 11:29:40.115803] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:44.569 [2024-07-26 11:29:40.115875] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:44.569 [2024-07-26 11:29:40.115895] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:44.569 [2024-07-26 11:29:40.115913] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:44.569 [2024-07-26 11:29:40.115927] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
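The chmod 0666 step above deliberately loosens the key file, and the initiator refuses it outright: bdev_nvme_load_psk reports "Incorrect permissions for PSK file" and the RPC fails with -1 (Operation not permitted) before any TCP traffic is attempted, a different failure mode from the earlier -5 handshake errors. A pre-flight check in the same spirit (illustrative, not taken from the test scripts):

    # Refuse group/other-accessible key files, mirroring the check that
    # produced the -1 (Operation not permitted) error above.
    key=/tmp/tmp.lD0dPRpfsy
    mode=$(stat -c '%a' "$key")
    [ "$mode" = 600 ] || { echo "refusing $key: mode $mode, want 0600" >&2; exit 1; }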
00:21:44.569 [2024-07-26 11:29:40.115965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.827 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:44.827 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:44.827 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:44.827 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:44.827 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.827 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:44.827 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.lD0dPRpfsy 00:21:44.827 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:44.827 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.lD0dPRpfsy 00:21:44.827 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:21:44.827 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:44.827 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:21:44.827 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:44.827 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.lD0dPRpfsy 00:21:44.827 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.lD0dPRpfsy 00:21:44.827 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:45.085 [2024-07-26 11:29:40.679503] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:45.085 11:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:45.651 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:45.909 [2024-07-26 11:29:41.405513] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:45.909 [2024-07-26 11:29:41.405789] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:45.909 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:46.167 malloc0 00:21:46.167 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:46.732 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lD0dPRpfsy 00:21:46.990 [2024-07-26 11:29:42.490392] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:46.990 [2024-07-26 11:29:42.490453] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:46.990 [2024-07-26 11:29:42.490502] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:46.990 request: 00:21:46.990 { 00:21:46.990 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:46.990 "host": "nqn.2016-06.io.spdk:host1", 00:21:46.990 "psk": "/tmp/tmp.lD0dPRpfsy", 00:21:46.990 "method": "nvmf_subsystem_add_host", 00:21:46.990 "req_id": 1 00:21:46.990 } 00:21:46.991 Got JSON-RPC error response 00:21:46.991 response: 00:21:46.991 { 00:21:46.991 "code": -32603, 00:21:46.991 "message": "Internal error" 00:21:46.991 } 00:21:46.991 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:46.991 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:46.991 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:46.991 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:46.991 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 2148573 00:21:46.991 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2148573 ']' 00:21:46.991 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2148573 00:21:46.991 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:46.991 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:46.991 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2148573 00:21:46.991 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:46.991 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:46.991 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2148573' 00:21:46.991 killing process with pid 2148573 00:21:46.991 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2148573 00:21:46.991 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2148573 00:21:47.356 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.lD0dPRpfsy 00:21:47.356 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:47.356 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:47.356 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:47.356 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:47.356 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2148952 00:21:47.356 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:47.356 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # 
waitforlisten 2148952 00:21:47.356 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2148952 ']' 00:21:47.356 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.356 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:47.356 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.356 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:47.356 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:47.356 [2024-07-26 11:29:42.956202] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:21:47.356 [2024-07-26 11:29:42.956309] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:47.614 EAL: No free 2048 kB hugepages reported on node 1 00:21:47.614 [2024-07-26 11:29:43.044575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.614 [2024-07-26 11:29:43.181802] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:47.614 [2024-07-26 11:29:43.181881] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:47.614 [2024-07-26 11:29:43.181902] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:47.614 [2024-07-26 11:29:43.181918] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:47.614 [2024-07-26 11:29:43.181932] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
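[Note: the nvmf_subsystem_add_host RPC above was rejected with JSON-RPC error -32603 because the PSK file's mode was too open ("Incorrect permissions for PSK file"); the TCP transport only loads a PSK from a file readable by its owner alone. The test tightens the mode with chmod 0600 before restarting the target and retrying. A minimal sketch of that fix, reusing the key path and NQNs verbatim from this run and shortening the absolute workspace path to the usual scripts/rpc.py — this sketch is not part of the captured output:]

    # owner-only permissions are required before the PSK file is accepted
    chmod 0600 /tmp/tmp.lD0dPRpfsy
    # retry once the target is listening again
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lD0dPRpfsy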
00:21:47.614 [2024-07-26 11:29:43.181969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.871 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:47.871 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:47.871 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:47.871 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:47.871 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:47.871 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:47.871 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.lD0dPRpfsy 00:21:47.871 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.lD0dPRpfsy 00:21:47.871 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:48.127 [2024-07-26 11:29:43.612326] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:48.127 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:48.385 11:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:48.643 [2024-07-26 11:29:44.270151] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:48.643 [2024-07-26 11:29:44.270450] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:48.643 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:49.576 malloc0 00:21:49.576 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:49.834 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lD0dPRpfsy 00:21:50.402 [2024-07-26 11:29:45.977316] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:50.402 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2149294 00:21:50.402 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:50.402 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:50.402 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2149294 /var/tmp/bdevperf.sock 00:21:50.402 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- 
# '[' -z 2149294 ']' 00:21:50.402 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:50.402 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:50.402 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:50.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:50.403 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:50.403 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:50.662 [2024-07-26 11:29:46.101502] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:21:50.662 [2024-07-26 11:29:46.101669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2149294 ] 00:21:50.662 EAL: No free 2048 kB hugepages reported on node 1 00:21:50.662 [2024-07-26 11:29:46.218936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.920 [2024-07-26 11:29:46.362786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:50.920 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:50.920 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:50.920 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lD0dPRpfsy 00:21:51.178 [2024-07-26 11:29:46.810728] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:51.178 [2024-07-26 11:29:46.810891] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:51.436 TLSTESTn1 00:21:51.436 11:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:51.693 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:21:51.693 "subsystems": [ 00:21:51.693 { 00:21:51.693 "subsystem": "keyring", 00:21:51.693 "config": [] 00:21:51.693 }, 00:21:51.693 { 00:21:51.693 "subsystem": "iobuf", 00:21:51.693 "config": [ 00:21:51.693 { 00:21:51.693 "method": "iobuf_set_options", 00:21:51.693 "params": { 00:21:51.693 "small_pool_count": 8192, 00:21:51.693 "large_pool_count": 1024, 00:21:51.693 "small_bufsize": 8192, 00:21:51.693 "large_bufsize": 135168 00:21:51.693 } 00:21:51.693 } 00:21:51.693 ] 00:21:51.693 }, 00:21:51.693 { 00:21:51.693 "subsystem": "sock", 00:21:51.693 "config": [ 00:21:51.693 { 00:21:51.693 "method": "sock_set_default_impl", 00:21:51.693 "params": { 00:21:51.693 "impl_name": "posix" 00:21:51.693 } 00:21:51.693 }, 00:21:51.693 { 00:21:51.693 "method": "sock_impl_set_options", 00:21:51.693 "params": { 00:21:51.693 "impl_name": "ssl", 00:21:51.693 "recv_buf_size": 4096, 00:21:51.693 "send_buf_size": 4096, 
00:21:51.693 "enable_recv_pipe": true, 00:21:51.693 "enable_quickack": false, 00:21:51.693 "enable_placement_id": 0, 00:21:51.693 "enable_zerocopy_send_server": true, 00:21:51.693 "enable_zerocopy_send_client": false, 00:21:51.693 "zerocopy_threshold": 0, 00:21:51.693 "tls_version": 0, 00:21:51.693 "enable_ktls": false 00:21:51.693 } 00:21:51.693 }, 00:21:51.693 { 00:21:51.693 "method": "sock_impl_set_options", 00:21:51.693 "params": { 00:21:51.694 "impl_name": "posix", 00:21:51.694 "recv_buf_size": 2097152, 00:21:51.694 "send_buf_size": 2097152, 00:21:51.694 "enable_recv_pipe": true, 00:21:51.694 "enable_quickack": false, 00:21:51.694 "enable_placement_id": 0, 00:21:51.694 "enable_zerocopy_send_server": true, 00:21:51.694 "enable_zerocopy_send_client": false, 00:21:51.694 "zerocopy_threshold": 0, 00:21:51.694 "tls_version": 0, 00:21:51.694 "enable_ktls": false 00:21:51.694 } 00:21:51.694 } 00:21:51.694 ] 00:21:51.694 }, 00:21:51.694 { 00:21:51.694 "subsystem": "vmd", 00:21:51.694 "config": [] 00:21:51.694 }, 00:21:51.694 { 00:21:51.694 "subsystem": "accel", 00:21:51.694 "config": [ 00:21:51.694 { 00:21:51.694 "method": "accel_set_options", 00:21:51.694 "params": { 00:21:51.694 "small_cache_size": 128, 00:21:51.694 "large_cache_size": 16, 00:21:51.694 "task_count": 2048, 00:21:51.694 "sequence_count": 2048, 00:21:51.694 "buf_count": 2048 00:21:51.694 } 00:21:51.694 } 00:21:51.694 ] 00:21:51.694 }, 00:21:51.694 { 00:21:51.694 "subsystem": "bdev", 00:21:51.694 "config": [ 00:21:51.694 { 00:21:51.694 "method": "bdev_set_options", 00:21:51.694 "params": { 00:21:51.694 "bdev_io_pool_size": 65535, 00:21:51.694 "bdev_io_cache_size": 256, 00:21:51.694 "bdev_auto_examine": true, 00:21:51.694 "iobuf_small_cache_size": 128, 00:21:51.694 "iobuf_large_cache_size": 16 00:21:51.694 } 00:21:51.694 }, 00:21:51.694 { 00:21:51.694 "method": "bdev_raid_set_options", 00:21:51.694 "params": { 00:21:51.694 "process_window_size_kb": 1024, 00:21:51.694 "process_max_bandwidth_mb_sec": 0 00:21:51.694 } 00:21:51.694 }, 00:21:51.694 { 00:21:51.694 "method": "bdev_iscsi_set_options", 00:21:51.694 "params": { 00:21:51.694 "timeout_sec": 30 00:21:51.694 } 00:21:51.694 }, 00:21:51.694 { 00:21:51.694 "method": "bdev_nvme_set_options", 00:21:51.694 "params": { 00:21:51.694 "action_on_timeout": "none", 00:21:51.694 "timeout_us": 0, 00:21:51.694 "timeout_admin_us": 0, 00:21:51.694 "keep_alive_timeout_ms": 10000, 00:21:51.694 "arbitration_burst": 0, 00:21:51.694 "low_priority_weight": 0, 00:21:51.694 "medium_priority_weight": 0, 00:21:51.694 "high_priority_weight": 0, 00:21:51.694 "nvme_adminq_poll_period_us": 10000, 00:21:51.694 "nvme_ioq_poll_period_us": 0, 00:21:51.694 "io_queue_requests": 0, 00:21:51.694 "delay_cmd_submit": true, 00:21:51.694 "transport_retry_count": 4, 00:21:51.694 "bdev_retry_count": 3, 00:21:51.694 "transport_ack_timeout": 0, 00:21:51.694 "ctrlr_loss_timeout_sec": 0, 00:21:51.694 "reconnect_delay_sec": 0, 00:21:51.694 "fast_io_fail_timeout_sec": 0, 00:21:51.694 "disable_auto_failback": false, 00:21:51.694 "generate_uuids": false, 00:21:51.694 "transport_tos": 0, 00:21:51.694 "nvme_error_stat": false, 00:21:51.694 "rdma_srq_size": 0, 00:21:51.694 "io_path_stat": false, 00:21:51.694 "allow_accel_sequence": false, 00:21:51.694 "rdma_max_cq_size": 0, 00:21:51.694 "rdma_cm_event_timeout_ms": 0, 00:21:51.694 "dhchap_digests": [ 00:21:51.694 "sha256", 00:21:51.694 "sha384", 00:21:51.694 "sha512" 00:21:51.694 ], 00:21:51.694 "dhchap_dhgroups": [ 00:21:51.694 "null", 00:21:51.694 "ffdhe2048", 00:21:51.694 
"ffdhe3072", 00:21:51.694 "ffdhe4096", 00:21:51.694 "ffdhe6144", 00:21:51.694 "ffdhe8192" 00:21:51.694 ] 00:21:51.694 } 00:21:51.694 }, 00:21:51.694 { 00:21:51.694 "method": "bdev_nvme_set_hotplug", 00:21:51.694 "params": { 00:21:51.694 "period_us": 100000, 00:21:51.694 "enable": false 00:21:51.694 } 00:21:51.694 }, 00:21:51.694 { 00:21:51.694 "method": "bdev_malloc_create", 00:21:51.694 "params": { 00:21:51.694 "name": "malloc0", 00:21:51.694 "num_blocks": 8192, 00:21:51.694 "block_size": 4096, 00:21:51.694 "physical_block_size": 4096, 00:21:51.694 "uuid": "76adb288-90b1-4fae-a1a2-6b380455b9ea", 00:21:51.694 "optimal_io_boundary": 0, 00:21:51.694 "md_size": 0, 00:21:51.694 "dif_type": 0, 00:21:51.694 "dif_is_head_of_md": false, 00:21:51.694 "dif_pi_format": 0 00:21:51.694 } 00:21:51.694 }, 00:21:51.694 { 00:21:51.694 "method": "bdev_wait_for_examine" 00:21:51.694 } 00:21:51.694 ] 00:21:51.694 }, 00:21:51.694 { 00:21:51.694 "subsystem": "nbd", 00:21:51.694 "config": [] 00:21:51.694 }, 00:21:51.694 { 00:21:51.694 "subsystem": "scheduler", 00:21:51.694 "config": [ 00:21:51.694 { 00:21:51.694 "method": "framework_set_scheduler", 00:21:51.694 "params": { 00:21:51.694 "name": "static" 00:21:51.694 } 00:21:51.694 } 00:21:51.694 ] 00:21:51.694 }, 00:21:51.694 { 00:21:51.694 "subsystem": "nvmf", 00:21:51.694 "config": [ 00:21:51.694 { 00:21:51.694 "method": "nvmf_set_config", 00:21:51.694 "params": { 00:21:51.694 "discovery_filter": "match_any", 00:21:51.694 "admin_cmd_passthru": { 00:21:51.694 "identify_ctrlr": false 00:21:51.694 } 00:21:51.694 } 00:21:51.694 }, 00:21:51.694 { 00:21:51.694 "method": "nvmf_set_max_subsystems", 00:21:51.694 "params": { 00:21:51.694 "max_subsystems": 1024 00:21:51.694 } 00:21:51.694 }, 00:21:51.694 { 00:21:51.694 "method": "nvmf_set_crdt", 00:21:51.694 "params": { 00:21:51.694 "crdt1": 0, 00:21:51.694 "crdt2": 0, 00:21:51.694 "crdt3": 0 00:21:51.694 } 00:21:51.694 }, 00:21:51.694 { 00:21:51.694 "method": "nvmf_create_transport", 00:21:51.694 "params": { 00:21:51.694 "trtype": "TCP", 00:21:51.694 "max_queue_depth": 128, 00:21:51.694 "max_io_qpairs_per_ctrlr": 127, 00:21:51.694 "in_capsule_data_size": 4096, 00:21:51.694 "max_io_size": 131072, 00:21:51.694 "io_unit_size": 131072, 00:21:51.694 "max_aq_depth": 128, 00:21:51.694 "num_shared_buffers": 511, 00:21:51.694 "buf_cache_size": 4294967295, 00:21:51.694 "dif_insert_or_strip": false, 00:21:51.694 "zcopy": false, 00:21:51.694 "c2h_success": false, 00:21:51.694 "sock_priority": 0, 00:21:51.694 "abort_timeout_sec": 1, 00:21:51.694 "ack_timeout": 0, 00:21:51.694 "data_wr_pool_size": 0 00:21:51.694 } 00:21:51.694 }, 00:21:51.694 { 00:21:51.694 "method": "nvmf_create_subsystem", 00:21:51.694 "params": { 00:21:51.694 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:51.694 "allow_any_host": false, 00:21:51.694 "serial_number": "SPDK00000000000001", 00:21:51.694 "model_number": "SPDK bdev Controller", 00:21:51.694 "max_namespaces": 10, 00:21:51.694 "min_cntlid": 1, 00:21:51.694 "max_cntlid": 65519, 00:21:51.694 "ana_reporting": false 00:21:51.694 } 00:21:51.694 }, 00:21:51.694 { 00:21:51.694 "method": "nvmf_subsystem_add_host", 00:21:51.694 "params": { 00:21:51.694 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:51.694 "host": "nqn.2016-06.io.spdk:host1", 00:21:51.694 "psk": "/tmp/tmp.lD0dPRpfsy" 00:21:51.694 } 00:21:51.694 }, 00:21:51.694 { 00:21:51.694 "method": "nvmf_subsystem_add_ns", 00:21:51.694 "params": { 00:21:51.694 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:51.694 "namespace": { 00:21:51.694 "nsid": 1, 00:21:51.694 
"bdev_name": "malloc0", 00:21:51.694 "nguid": "76ADB28890B14FAEA1A26B380455B9EA", 00:21:51.694 "uuid": "76adb288-90b1-4fae-a1a2-6b380455b9ea", 00:21:51.694 "no_auto_visible": false 00:21:51.694 } 00:21:51.694 } 00:21:51.694 }, 00:21:51.694 { 00:21:51.694 "method": "nvmf_subsystem_add_listener", 00:21:51.694 "params": { 00:21:51.694 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:51.694 "listen_address": { 00:21:51.694 "trtype": "TCP", 00:21:51.694 "adrfam": "IPv4", 00:21:51.695 "traddr": "10.0.0.2", 00:21:51.695 "trsvcid": "4420" 00:21:51.695 }, 00:21:51.695 "secure_channel": true 00:21:51.695 } 00:21:51.695 } 00:21:51.695 ] 00:21:51.695 } 00:21:51.695 ] 00:21:51.695 }' 00:21:51.695 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:52.259 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:21:52.259 "subsystems": [ 00:21:52.259 { 00:21:52.259 "subsystem": "keyring", 00:21:52.259 "config": [] 00:21:52.259 }, 00:21:52.259 { 00:21:52.259 "subsystem": "iobuf", 00:21:52.259 "config": [ 00:21:52.259 { 00:21:52.259 "method": "iobuf_set_options", 00:21:52.259 "params": { 00:21:52.259 "small_pool_count": 8192, 00:21:52.259 "large_pool_count": 1024, 00:21:52.259 "small_bufsize": 8192, 00:21:52.259 "large_bufsize": 135168 00:21:52.259 } 00:21:52.259 } 00:21:52.259 ] 00:21:52.259 }, 00:21:52.259 { 00:21:52.259 "subsystem": "sock", 00:21:52.259 "config": [ 00:21:52.259 { 00:21:52.259 "method": "sock_set_default_impl", 00:21:52.259 "params": { 00:21:52.259 "impl_name": "posix" 00:21:52.259 } 00:21:52.259 }, 00:21:52.259 { 00:21:52.259 "method": "sock_impl_set_options", 00:21:52.259 "params": { 00:21:52.259 "impl_name": "ssl", 00:21:52.259 "recv_buf_size": 4096, 00:21:52.259 "send_buf_size": 4096, 00:21:52.259 "enable_recv_pipe": true, 00:21:52.259 "enable_quickack": false, 00:21:52.259 "enable_placement_id": 0, 00:21:52.259 "enable_zerocopy_send_server": true, 00:21:52.259 "enable_zerocopy_send_client": false, 00:21:52.259 "zerocopy_threshold": 0, 00:21:52.259 "tls_version": 0, 00:21:52.259 "enable_ktls": false 00:21:52.259 } 00:21:52.259 }, 00:21:52.259 { 00:21:52.259 "method": "sock_impl_set_options", 00:21:52.259 "params": { 00:21:52.259 "impl_name": "posix", 00:21:52.259 "recv_buf_size": 2097152, 00:21:52.259 "send_buf_size": 2097152, 00:21:52.259 "enable_recv_pipe": true, 00:21:52.259 "enable_quickack": false, 00:21:52.259 "enable_placement_id": 0, 00:21:52.259 "enable_zerocopy_send_server": true, 00:21:52.259 "enable_zerocopy_send_client": false, 00:21:52.259 "zerocopy_threshold": 0, 00:21:52.259 "tls_version": 0, 00:21:52.259 "enable_ktls": false 00:21:52.259 } 00:21:52.259 } 00:21:52.259 ] 00:21:52.259 }, 00:21:52.259 { 00:21:52.259 "subsystem": "vmd", 00:21:52.259 "config": [] 00:21:52.259 }, 00:21:52.259 { 00:21:52.259 "subsystem": "accel", 00:21:52.259 "config": [ 00:21:52.259 { 00:21:52.259 "method": "accel_set_options", 00:21:52.259 "params": { 00:21:52.259 "small_cache_size": 128, 00:21:52.259 "large_cache_size": 16, 00:21:52.259 "task_count": 2048, 00:21:52.259 "sequence_count": 2048, 00:21:52.259 "buf_count": 2048 00:21:52.259 } 00:21:52.260 } 00:21:52.260 ] 00:21:52.260 }, 00:21:52.260 { 00:21:52.260 "subsystem": "bdev", 00:21:52.260 "config": [ 00:21:52.260 { 00:21:52.260 "method": "bdev_set_options", 00:21:52.260 "params": { 00:21:52.260 "bdev_io_pool_size": 65535, 00:21:52.260 "bdev_io_cache_size": 256, 00:21:52.260 
"bdev_auto_examine": true, 00:21:52.260 "iobuf_small_cache_size": 128, 00:21:52.260 "iobuf_large_cache_size": 16 00:21:52.260 } 00:21:52.260 }, 00:21:52.260 { 00:21:52.260 "method": "bdev_raid_set_options", 00:21:52.260 "params": { 00:21:52.260 "process_window_size_kb": 1024, 00:21:52.260 "process_max_bandwidth_mb_sec": 0 00:21:52.260 } 00:21:52.260 }, 00:21:52.260 { 00:21:52.260 "method": "bdev_iscsi_set_options", 00:21:52.260 "params": { 00:21:52.260 "timeout_sec": 30 00:21:52.260 } 00:21:52.260 }, 00:21:52.260 { 00:21:52.260 "method": "bdev_nvme_set_options", 00:21:52.260 "params": { 00:21:52.260 "action_on_timeout": "none", 00:21:52.260 "timeout_us": 0, 00:21:52.260 "timeout_admin_us": 0, 00:21:52.260 "keep_alive_timeout_ms": 10000, 00:21:52.260 "arbitration_burst": 0, 00:21:52.260 "low_priority_weight": 0, 00:21:52.260 "medium_priority_weight": 0, 00:21:52.260 "high_priority_weight": 0, 00:21:52.260 "nvme_adminq_poll_period_us": 10000, 00:21:52.260 "nvme_ioq_poll_period_us": 0, 00:21:52.260 "io_queue_requests": 512, 00:21:52.260 "delay_cmd_submit": true, 00:21:52.260 "transport_retry_count": 4, 00:21:52.260 "bdev_retry_count": 3, 00:21:52.260 "transport_ack_timeout": 0, 00:21:52.260 "ctrlr_loss_timeout_sec": 0, 00:21:52.260 "reconnect_delay_sec": 0, 00:21:52.260 "fast_io_fail_timeout_sec": 0, 00:21:52.260 "disable_auto_failback": false, 00:21:52.260 "generate_uuids": false, 00:21:52.260 "transport_tos": 0, 00:21:52.260 "nvme_error_stat": false, 00:21:52.260 "rdma_srq_size": 0, 00:21:52.260 "io_path_stat": false, 00:21:52.260 "allow_accel_sequence": false, 00:21:52.260 "rdma_max_cq_size": 0, 00:21:52.260 "rdma_cm_event_timeout_ms": 0, 00:21:52.260 "dhchap_digests": [ 00:21:52.260 "sha256", 00:21:52.260 "sha384", 00:21:52.260 "sha512" 00:21:52.260 ], 00:21:52.260 "dhchap_dhgroups": [ 00:21:52.260 "null", 00:21:52.260 "ffdhe2048", 00:21:52.260 "ffdhe3072", 00:21:52.260 "ffdhe4096", 00:21:52.260 "ffdhe6144", 00:21:52.260 "ffdhe8192" 00:21:52.260 ] 00:21:52.260 } 00:21:52.260 }, 00:21:52.260 { 00:21:52.260 "method": "bdev_nvme_attach_controller", 00:21:52.260 "params": { 00:21:52.260 "name": "TLSTEST", 00:21:52.260 "trtype": "TCP", 00:21:52.260 "adrfam": "IPv4", 00:21:52.260 "traddr": "10.0.0.2", 00:21:52.260 "trsvcid": "4420", 00:21:52.260 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:52.260 "prchk_reftag": false, 00:21:52.260 "prchk_guard": false, 00:21:52.260 "ctrlr_loss_timeout_sec": 0, 00:21:52.260 "reconnect_delay_sec": 0, 00:21:52.260 "fast_io_fail_timeout_sec": 0, 00:21:52.260 "psk": "/tmp/tmp.lD0dPRpfsy", 00:21:52.260 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:52.260 "hdgst": false, 00:21:52.260 "ddgst": false 00:21:52.260 } 00:21:52.260 }, 00:21:52.260 { 00:21:52.260 "method": "bdev_nvme_set_hotplug", 00:21:52.260 "params": { 00:21:52.260 "period_us": 100000, 00:21:52.260 "enable": false 00:21:52.260 } 00:21:52.260 }, 00:21:52.260 { 00:21:52.260 "method": "bdev_wait_for_examine" 00:21:52.260 } 00:21:52.260 ] 00:21:52.260 }, 00:21:52.260 { 00:21:52.260 "subsystem": "nbd", 00:21:52.260 "config": [] 00:21:52.260 } 00:21:52.260 ] 00:21:52.260 }' 00:21:52.260 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 2149294 00:21:52.260 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2149294 ']' 00:21:52.260 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2149294 00:21:52.260 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 
00:21:52.260 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:52.260 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2149294 00:21:52.260 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:52.260 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:52.260 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2149294' 00:21:52.260 killing process with pid 2149294 00:21:52.260 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2149294 00:21:52.260 Received shutdown signal, test time was about 10.000000 seconds 00:21:52.260 00:21:52.260 Latency(us) 00:21:52.260 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:52.260 =================================================================================================================== 00:21:52.260 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:52.260 [2024-07-26 11:29:47.704364] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:52.260 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2149294 00:21:52.517 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 2148952 00:21:52.517 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2148952 ']' 00:21:52.517 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2148952 00:21:52.517 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:52.517 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:52.517 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2148952 00:21:52.517 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:52.517 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:52.517 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2148952' 00:21:52.517 killing process with pid 2148952 00:21:52.517 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2148952 00:21:52.517 [2024-07-26 11:29:48.080615] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:52.517 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2148952 00:21:52.777 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:52.777 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:52.777 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:21:52.777 "subsystems": [ 00:21:52.777 { 00:21:52.777 "subsystem": "keyring", 00:21:52.777 "config": [] 00:21:52.777 }, 00:21:52.777 { 00:21:52.777 "subsystem": "iobuf", 00:21:52.777 "config": [ 00:21:52.777 { 00:21:52.777 "method": "iobuf_set_options", 
00:21:52.777 "params": { 00:21:52.777 "small_pool_count": 8192, 00:21:52.777 "large_pool_count": 1024, 00:21:52.777 "small_bufsize": 8192, 00:21:52.777 "large_bufsize": 135168 00:21:52.777 } 00:21:52.777 } 00:21:52.777 ] 00:21:52.777 }, 00:21:52.777 { 00:21:52.777 "subsystem": "sock", 00:21:52.777 "config": [ 00:21:52.777 { 00:21:52.777 "method": "sock_set_default_impl", 00:21:52.777 "params": { 00:21:52.777 "impl_name": "posix" 00:21:52.777 } 00:21:52.777 }, 00:21:52.777 { 00:21:52.777 "method": "sock_impl_set_options", 00:21:52.777 "params": { 00:21:52.777 "impl_name": "ssl", 00:21:52.777 "recv_buf_size": 4096, 00:21:52.777 "send_buf_size": 4096, 00:21:52.777 "enable_recv_pipe": true, 00:21:52.777 "enable_quickack": false, 00:21:52.777 "enable_placement_id": 0, 00:21:52.777 "enable_zerocopy_send_server": true, 00:21:52.777 "enable_zerocopy_send_client": false, 00:21:52.777 "zerocopy_threshold": 0, 00:21:52.777 "tls_version": 0, 00:21:52.777 "enable_ktls": false 00:21:52.777 } 00:21:52.777 }, 00:21:52.777 { 00:21:52.777 "method": "sock_impl_set_options", 00:21:52.777 "params": { 00:21:52.777 "impl_name": "posix", 00:21:52.777 "recv_buf_size": 2097152, 00:21:52.777 "send_buf_size": 2097152, 00:21:52.777 "enable_recv_pipe": true, 00:21:52.777 "enable_quickack": false, 00:21:52.777 "enable_placement_id": 0, 00:21:52.777 "enable_zerocopy_send_server": true, 00:21:52.777 "enable_zerocopy_send_client": false, 00:21:52.777 "zerocopy_threshold": 0, 00:21:52.777 "tls_version": 0, 00:21:52.777 "enable_ktls": false 00:21:52.777 } 00:21:52.777 } 00:21:52.777 ] 00:21:52.777 }, 00:21:52.777 { 00:21:52.777 "subsystem": "vmd", 00:21:52.777 "config": [] 00:21:52.777 }, 00:21:52.777 { 00:21:52.777 "subsystem": "accel", 00:21:52.777 "config": [ 00:21:52.777 { 00:21:52.777 "method": "accel_set_options", 00:21:52.777 "params": { 00:21:52.777 "small_cache_size": 128, 00:21:52.777 "large_cache_size": 16, 00:21:52.777 "task_count": 2048, 00:21:52.777 "sequence_count": 2048, 00:21:52.777 "buf_count": 2048 00:21:52.777 } 00:21:52.777 } 00:21:52.777 ] 00:21:52.777 }, 00:21:52.777 { 00:21:52.777 "subsystem": "bdev", 00:21:52.777 "config": [ 00:21:52.777 { 00:21:52.777 "method": "bdev_set_options", 00:21:52.777 "params": { 00:21:52.777 "bdev_io_pool_size": 65535, 00:21:52.777 "bdev_io_cache_size": 256, 00:21:52.777 "bdev_auto_examine": true, 00:21:52.777 "iobuf_small_cache_size": 128, 00:21:52.777 "iobuf_large_cache_size": 16 00:21:52.777 } 00:21:52.777 }, 00:21:52.777 { 00:21:52.777 "method": "bdev_raid_set_options", 00:21:52.777 "params": { 00:21:52.777 "process_window_size_kb": 1024, 00:21:52.778 "process_max_bandwidth_mb_sec": 0 00:21:52.778 } 00:21:52.778 }, 00:21:52.778 { 00:21:52.778 "method": "bdev_iscsi_set_options", 00:21:52.778 "params": { 00:21:52.778 "timeout_sec": 30 00:21:52.778 } 00:21:52.778 }, 00:21:52.778 { 00:21:52.778 "method": "bdev_nvme_set_options", 00:21:52.778 "params": { 00:21:52.778 "action_on_timeout": "none", 00:21:52.778 "timeout_us": 0, 00:21:52.778 "timeout_admin_us": 0, 00:21:52.778 "keep_alive_timeout_ms": 10000, 00:21:52.778 "arbitration_burst": 0, 00:21:52.778 "low_priority_weight": 0, 00:21:52.778 "medium_priority_weight": 0, 00:21:52.778 "high_priority_weight": 0, 00:21:52.778 "nvme_adminq_poll_period_us": 10000, 00:21:52.778 "nvme_ioq_poll_period_us": 0, 00:21:52.778 "io_queue_requests": 0, 00:21:52.778 "delay_cmd_submit": true, 00:21:52.778 "transport_retry_count": 4, 00:21:52.778 "bdev_retry_count": 3, 00:21:52.778 "transport_ack_timeout": 0, 00:21:52.778 
"ctrlr_loss_timeout_sec": 0, 00:21:52.778 "reconnect_delay_sec": 0, 00:21:52.778 "fast_io_fail_timeout_sec": 0, 00:21:52.778 "disable_auto_failback": false, 00:21:52.778 "generate_uuids": false, 00:21:52.778 "transport_tos": 0, 00:21:52.778 "nvme_error_stat": false, 00:21:52.778 "rdma_srq_size": 0, 00:21:52.778 "io_path_stat": false, 00:21:52.778 "allow_accel_sequence": false, 00:21:52.778 "rdma_max_cq_size": 0, 00:21:52.778 "rdma_cm_event_timeout_ms": 0, 00:21:52.778 "dhchap_digests": [ 00:21:52.778 "sha256", 00:21:52.778 "sha384", 00:21:52.778 "sha512" 00:21:52.778 ], 00:21:52.778 "dhchap_dhgroups": [ 00:21:52.778 "null", 00:21:52.778 "ffdhe2048", 00:21:52.778 "ffdhe3072", 00:21:52.778 "ffdhe4096", 00:21:52.778 "ffdhe6144", 00:21:52.778 "ffdhe8192" 00:21:52.778 ] 00:21:52.778 } 00:21:52.778 }, 00:21:52.778 { 00:21:52.778 "method": "bdev_nvme_set_hotplug", 00:21:52.778 "params": { 00:21:52.778 "period_us": 100000, 00:21:52.778 "enable": false 00:21:52.778 } 00:21:52.778 }, 00:21:52.778 { 00:21:52.778 "method": "bdev_malloc_create", 00:21:52.778 "params": { 00:21:52.778 "name": "malloc0", 00:21:52.778 "num_blocks": 8192, 00:21:52.778 "block_size": 4096, 00:21:52.778 "physical_block_size": 4096, 00:21:52.778 "uuid": "76adb288-90b1-4fae-a1a2-6b380455b9ea", 00:21:52.778 "optimal_io_boundary": 0, 00:21:52.778 "md_size": 0, 00:21:52.778 "dif_type": 0, 00:21:52.778 "dif_is_head_of_md": false, 00:21:52.778 "dif_pi_format": 0 00:21:52.778 } 00:21:52.778 }, 00:21:52.778 { 00:21:52.778 "method": "bdev_wait_for_examine" 00:21:52.778 } 00:21:52.778 ] 00:21:52.778 }, 00:21:52.778 { 00:21:52.778 "subsystem": "nbd", 00:21:52.778 "config": [] 00:21:52.778 }, 00:21:52.778 { 00:21:52.778 "subsystem": "scheduler", 00:21:52.778 "config": [ 00:21:52.778 { 00:21:52.778 "method": "framework_set_scheduler", 00:21:52.778 "params": { 00:21:52.778 "name": "static" 00:21:52.778 } 00:21:52.778 } 00:21:52.778 ] 00:21:52.778 }, 00:21:52.778 { 00:21:52.778 "subsystem": "nvmf", 00:21:52.778 "config": [ 00:21:52.778 { 00:21:52.778 "method": "nvmf_set_config", 00:21:52.778 "params": { 00:21:52.778 "discovery_filter": "match_any", 00:21:52.778 "admin_cmd_passthru": { 00:21:52.778 "identify_ctrlr": false 00:21:52.778 } 00:21:52.778 } 00:21:52.778 }, 00:21:52.778 { 00:21:52.778 "method": "nvmf_set_max_subsystems", 00:21:52.778 "params": { 00:21:52.778 "max_subsystems": 1024 00:21:52.778 } 00:21:52.778 }, 00:21:52.778 { 00:21:52.778 "method": "nvmf_set_crdt", 00:21:52.778 "params": { 00:21:52.778 "crdt1": 0, 00:21:52.778 "crdt2": 0, 00:21:52.778 "crdt3": 0 00:21:52.778 } 00:21:52.778 }, 00:21:52.778 { 00:21:52.778 "method": "nvmf_create_transport", 00:21:52.778 "params": { 00:21:52.778 "trtype": "TCP", 00:21:52.778 "max_queue_depth": 128, 00:21:52.778 "max_io_qpairs_per_ctrlr": 127, 00:21:52.778 "in_capsule_data_size": 4096, 00:21:52.778 "max_io_size": 131072, 00:21:52.778 "io_unit_size": 131072, 00:21:52.778 "max_aq_depth": 128, 00:21:52.778 "num_shared_buffers": 511, 00:21:52.778 "buf_cache_size": 4294967295, 00:21:52.778 "dif_insert_or_strip": false, 00:21:52.778 "zcopy": false, 00:21:52.778 "c2h_success": false, 00:21:52.778 "sock_priority": 0, 00:21:52.778 "abort_timeout_sec": 1, 00:21:52.778 "ack_timeout": 0, 00:21:52.778 "data_wr_pool_size": 0 00:21:52.778 } 00:21:52.778 }, 00:21:52.778 { 00:21:52.778 "method": "nvmf_create_subsystem", 00:21:52.778 "params": { 00:21:52.778 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:52.778 "allow_any_host": false, 00:21:52.778 "serial_number": "SPDK00000000000001", 00:21:52.778 
"model_number": "SPDK bdev Controller", 00:21:52.778 "max_namespaces": 10, 00:21:52.778 "min_cntlid": 1, 00:21:52.778 "max_cntlid": 65519, 00:21:52.778 "ana_reporting": false 00:21:52.778 } 00:21:52.778 }, 00:21:52.778 { 00:21:52.778 "method": "nvmf_subsystem_add_host", 00:21:52.778 "params": { 00:21:52.778 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:52.778 "host": "nqn.2016-06.io.spdk:host1", 00:21:52.778 "psk": "/tmp/tmp.lD0dPRpfsy" 00:21:52.778 } 00:21:52.778 }, 00:21:52.778 { 00:21:52.778 "method": "nvmf_subsystem_add_ns", 00:21:52.778 "params": { 00:21:52.778 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:52.778 "namespace": { 00:21:52.778 "nsid": 1, 00:21:52.778 "bdev_name": "malloc0", 00:21:52.778 "nguid": "76ADB28890B14FAEA1A26B380455B9EA", 00:21:52.778 "uuid": "76adb288-90b1-4fae-a1a2-6b380455b9ea", 00:21:52.778 "no_auto_visible": false 00:21:52.778 } 00:21:52.778 } 00:21:52.778 }, 00:21:52.778 { 00:21:52.778 "method": "nvmf_subsystem_add_listener", 00:21:52.778 "params": { 00:21:52.778 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:52.778 "listen_address": { 00:21:52.778 "trtype": "TCP", 00:21:52.778 "adrfam": "IPv4", 00:21:52.778 "traddr": "10.0.0.2", 00:21:52.778 "trsvcid": "4420" 00:21:52.778 }, 00:21:52.778 "secure_channel": true 00:21:52.778 } 00:21:52.778 } 00:21:52.778 ] 00:21:52.778 } 00:21:52.778 ] 00:21:52.778 }' 00:21:52.778 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:52.778 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:52.778 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2149578 00:21:52.779 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:52.779 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2149578 00:21:52.779 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2149578 ']' 00:21:52.779 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.779 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:52.779 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:52.779 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:52.779 11:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:53.037 [2024-07-26 11:29:48.484543] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:21:53.037 [2024-07-26 11:29:48.484652] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:53.037 EAL: No free 2048 kB hugepages reported on node 1 00:21:53.037 [2024-07-26 11:29:48.566824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.295 [2024-07-26 11:29:48.714521] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:53.295 [2024-07-26 11:29:48.714577] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:53.296 [2024-07-26 11:29:48.714594] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:53.296 [2024-07-26 11:29:48.714607] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:53.296 [2024-07-26 11:29:48.714619] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:53.296 [2024-07-26 11:29:48.714717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:53.555 [2024-07-26 11:29:48.973536] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:53.555 [2024-07-26 11:29:48.999415] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:53.555 [2024-07-26 11:29:49.015505] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:53.555 [2024-07-26 11:29:49.015769] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:53.555 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:53.555 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:53.555 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:53.555 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:53.555 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:53.555 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:53.555 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2149716 00:21:53.555 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2149716 /var/tmp/bdevperf.sock 00:21:53.555 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2149716 ']' 00:21:53.555 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:53.555 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:53.555 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:53.555 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:21:53.555 "subsystems": [ 00:21:53.555 { 00:21:53.555 "subsystem": "keyring", 00:21:53.555 "config": [] 00:21:53.555 }, 00:21:53.555 { 00:21:53.555 "subsystem": "iobuf", 00:21:53.555 "config": [ 00:21:53.555 { 00:21:53.555 "method": "iobuf_set_options", 00:21:53.555 "params": { 00:21:53.555 "small_pool_count": 8192, 00:21:53.555 "large_pool_count": 1024, 00:21:53.555 "small_bufsize": 8192, 00:21:53.555 "large_bufsize": 135168 00:21:53.555 } 00:21:53.555 } 00:21:53.555 ] 00:21:53.555 }, 00:21:53.555 { 00:21:53.555 "subsystem": "sock", 00:21:53.555 "config": [ 00:21:53.555 { 00:21:53.556 "method": "sock_set_default_impl", 00:21:53.556 "params": { 00:21:53.556 "impl_name": "posix" 00:21:53.556 } 00:21:53.556 }, 
00:21:53.556 { 00:21:53.556 "method": "sock_impl_set_options", 00:21:53.556 "params": { 00:21:53.556 "impl_name": "ssl", 00:21:53.556 "recv_buf_size": 4096, 00:21:53.556 "send_buf_size": 4096, 00:21:53.556 "enable_recv_pipe": true, 00:21:53.556 "enable_quickack": false, 00:21:53.556 "enable_placement_id": 0, 00:21:53.556 "enable_zerocopy_send_server": true, 00:21:53.556 "enable_zerocopy_send_client": false, 00:21:53.556 "zerocopy_threshold": 0, 00:21:53.556 "tls_version": 0, 00:21:53.556 "enable_ktls": false 00:21:53.556 } 00:21:53.556 }, 00:21:53.556 { 00:21:53.556 "method": "sock_impl_set_options", 00:21:53.556 "params": { 00:21:53.556 "impl_name": "posix", 00:21:53.556 "recv_buf_size": 2097152, 00:21:53.556 "send_buf_size": 2097152, 00:21:53.556 "enable_recv_pipe": true, 00:21:53.556 "enable_quickack": false, 00:21:53.556 "enable_placement_id": 0, 00:21:53.556 "enable_zerocopy_send_server": true, 00:21:53.556 "enable_zerocopy_send_client": false, 00:21:53.556 "zerocopy_threshold": 0, 00:21:53.556 "tls_version": 0, 00:21:53.556 "enable_ktls": false 00:21:53.556 } 00:21:53.556 } 00:21:53.556 ] 00:21:53.556 }, 00:21:53.556 { 00:21:53.556 "subsystem": "vmd", 00:21:53.556 "config": [] 00:21:53.556 }, 00:21:53.556 { 00:21:53.556 "subsystem": "accel", 00:21:53.556 "config": [ 00:21:53.556 { 00:21:53.556 "method": "accel_set_options", 00:21:53.556 "params": { 00:21:53.556 "small_cache_size": 128, 00:21:53.556 "large_cache_size": 16, 00:21:53.556 "task_count": 2048, 00:21:53.556 "sequence_count": 2048, 00:21:53.556 "buf_count": 2048 00:21:53.556 } 00:21:53.556 } 00:21:53.556 ] 00:21:53.556 }, 00:21:53.556 { 00:21:53.556 "subsystem": "bdev", 00:21:53.556 "config": [ 00:21:53.556 { 00:21:53.556 "method": "bdev_set_options", 00:21:53.556 "params": { 00:21:53.556 "bdev_io_pool_size": 65535, 00:21:53.556 "bdev_io_cache_size": 256, 00:21:53.556 "bdev_auto_examine": true, 00:21:53.556 "iobuf_small_cache_size": 128, 00:21:53.556 "iobuf_large_cache_size": 16 00:21:53.556 } 00:21:53.556 }, 00:21:53.556 { 00:21:53.556 "method": "bdev_raid_set_options", 00:21:53.556 "params": { 00:21:53.556 "process_window_size_kb": 1024, 00:21:53.556 "process_max_bandwidth_mb_sec": 0 00:21:53.556 } 00:21:53.556 }, 00:21:53.556 { 00:21:53.556 "method": "bdev_iscsi_set_options", 00:21:53.556 "params": { 00:21:53.556 "timeout_sec": 30 00:21:53.556 } 00:21:53.556 }, 00:21:53.556 { 00:21:53.556 "method": "bdev_nvme_set_options", 00:21:53.556 "params": { 00:21:53.556 "action_on_timeout": "none", 00:21:53.556 "timeout_us": 0, 00:21:53.556 "timeout_admin_us": 0, 00:21:53.556 "keep_alive_timeout_ms": 10000, 00:21:53.556 "arbitration_burst": 0, 00:21:53.556 "low_priority_weight": 0, 00:21:53.556 "medium_priority_weight": 0, 00:21:53.556 "high_priority_weight": 0, 00:21:53.556 "nvme_adminq_poll_period_us": 10000, 00:21:53.556 "nvme_ioq_poll_period_us": 0, 00:21:53.556 "io_queue_requests": 512, 00:21:53.556 "delay_cmd_submit": true, 00:21:53.556 "transport_retry_count": 4, 00:21:53.556 "bdev_retry_count": 3, 00:21:53.556 "transport_ack_timeout": 0, 00:21:53.556 "ctrlr_loss_timeout_sec": 0, 00:21:53.556 "reconnect_delay_sec": 0, 00:21:53.556 "fast_io_fail_timeout_sec": 0, 00:21:53.556 "disable_auto_failback": false, 00:21:53.556 "generate_uuids": false, 00:21:53.556 "transport_tos": 0, 00:21:53.556 "nvme_error_stat": false, 00:21:53.556 "rdma_srq_size": 0, 00:21:53.556 "io_path_stat": false, 00:21:53.556 "allow_accel_sequence": false, 00:21:53.556 "rdma_max_cq_size": 0, 00:21:53.556 "rdma_cm_event_timeout_ms": 0, 00:21:53.556 
"dhchap_digests": [ 00:21:53.556 "sha256", 00:21:53.556 "sha384", 00:21:53.556 "sha512" 00:21:53.556 ], 00:21:53.556 "dhchap_dhgroups": [ 00:21:53.556 "null", 00:21:53.556 "ffdhe2048", 00:21:53.556 "ffdhe3072", 00:21:53.556 "ffdhe4096", 00:21:53.556 "ffdhe6144", 00:21:53.556 "ffdhe8192" 00:21:53.556 ] 00:21:53.556 } 00:21:53.556 }, 00:21:53.556 { 00:21:53.556 "method": "bdev_nvme_attach_controller", 00:21:53.556 "params": { 00:21:53.556 "name": "TLSTEST", 00:21:53.556 "trtype": "TCP", 00:21:53.556 "adrfam": "IPv4", 00:21:53.556 "traddr": "10.0.0.2", 00:21:53.556 "trsvcid": "4420", 00:21:53.556 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:53.556 "prchk_reftag": false, 00:21:53.556 "prchk_guard": false, 00:21:53.556 "ctrlr_loss_timeout_sec": 0, 00:21:53.556 "reconnect_delay_sec": 0, 00:21:53.556 "fast_io_fail_timeout_sec": 0, 00:21:53.556 "psk": "/tmp/tmp.lD0dPRpfsy", 00:21:53.556 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:53.556 "hdgst": false, 00:21:53.556 "ddgst": false 00:21:53.556 } 00:21:53.556 }, 00:21:53.556 { 00:21:53.556 "method": "bdev_nvme_set_hotplug", 00:21:53.556 "params": { 00:21:53.556 "period_us": 100000, 00:21:53.556 "enable": false 00:21:53.556 } 00:21:53.556 }, 00:21:53.556 { 00:21:53.556 "method": "bdev_wait_for_examine" 00:21:53.556 } 00:21:53.556 ] 00:21:53.556 }, 00:21:53.556 { 00:21:53.556 "subsystem": "nbd", 00:21:53.556 "config": [] 00:21:53.556 } 00:21:53.556 ] 00:21:53.556 }' 00:21:53.556 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:53.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:53.556 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:53.556 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:53.556 [2024-07-26 11:29:49.134460] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:21:53.556 [2024-07-26 11:29:49.134570] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2149716 ] 00:21:53.556 EAL: No free 2048 kB hugepages reported on node 1 00:21:53.556 [2024-07-26 11:29:49.214739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.815 [2024-07-26 11:29:49.357044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:54.073 [2024-07-26 11:29:49.549656] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:54.073 [2024-07-26 11:29:49.549839] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:55.007 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:55.007 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:55.007 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:55.008 Running I/O for 10 seconds... 
00:22:05.036 00:22:05.036 Latency(us) 00:22:05.036 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:05.036 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:05.036 Verification LBA range: start 0x0 length 0x2000 00:22:05.036 TLSTESTn1 : 10.06 2176.03 8.50 0.00 0.00 58641.81 8107.05 81167.55 00:22:05.036 =================================================================================================================== 00:22:05.036 Total : 2176.03 8.50 0.00 0.00 58641.81 8107.05 81167.55 00:22:05.036 0 00:22:05.036 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:05.036 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 2149716 00:22:05.036 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2149716 ']' 00:22:05.036 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2149716 00:22:05.036 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:05.036 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:05.036 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2149716 00:22:05.296 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:05.296 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:05.296 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2149716' 00:22:05.296 killing process with pid 2149716 00:22:05.296 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2149716 00:22:05.296 Received shutdown signal, test time was about 10.000000 seconds 00:22:05.296 00:22:05.296 Latency(us) 00:22:05.296 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:05.296 =================================================================================================================== 00:22:05.296 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:05.296 [2024-07-26 11:30:00.705665] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:05.296 11:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2149716 00:22:05.554 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 2149578 00:22:05.554 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2149578 ']' 00:22:05.554 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2149578 00:22:05.554 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:05.554 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:05.554 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2149578 00:22:05.554 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:05.554 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:05.554 11:30:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2149578' 00:22:05.554 killing process with pid 2149578 00:22:05.554 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2149578 00:22:05.554 [2024-07-26 11:30:01.054365] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:05.554 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2149578 00:22:05.813 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:22:05.813 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:05.813 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:05.813 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:05.813 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2151127 00:22:05.813 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:05.813 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2151127 00:22:05.813 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2151127 ']' 00:22:05.813 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.813 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:05.813 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.813 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:05.813 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:06.072 [2024-07-26 11:30:01.475087] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:22:06.072 [2024-07-26 11:30:01.475214] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:06.072 EAL: No free 2048 kB hugepages reported on node 1 00:22:06.072 [2024-07-26 11:30:01.566972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.072 [2024-07-26 11:30:01.698149] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:06.072 [2024-07-26 11:30:01.698218] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:06.072 [2024-07-26 11:30:01.698256] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:06.072 [2024-07-26 11:30:01.698278] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:06.072 [2024-07-26 11:30:01.698297] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
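[Note: the lines that follow trace the target-side TLS bring-up one RPC at a time (setup_nvmf_tgt in target/tls.sh). Condensed for reference, the sequence is the one below; all addresses, NQNs, and the key path are taken verbatim from this run, the absolute workspace path is shortened to scripts/rpc.py, and the -k flag on the listener is what enables the (experimental) TLS support noted in the log:]

    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k marks the listener as TLS-enabled
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lD0dPRpfsy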
00:22:06.072 [2024-07-26 11:30:01.698338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:22:06.330 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:22:06.330 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0
00:22:06.330 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:22:06.330 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable
00:22:06.330 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:06.330 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:06.330 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.lD0dPRpfsy
00:22:06.330 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.lD0dPRpfsy
00:22:06.330 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:22:06.895 [2024-07-26 11:30:02.374848] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:06.895 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:22:07.153 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:22:07.450 [2024-07-26 11:30:03.004561] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:22:07.450 [2024-07-26 11:30:03.004883] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:07.450 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:22:07.708 malloc0
00:22:07.708 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:22:08.272 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lD0dPRpfsy
00:22:08.530 [2024-07-26 11:30:04.083588] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09
00:22:08.530 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2151587
00:22:08.530 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1
00:22:08.530 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:22:08.530 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2151587 /var/tmp/bdevperf.sock
00:22:08.530 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2151587 ']'
00:22:08.530 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:08.530 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100
00:22:08.530 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:08.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:08.530 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable
00:22:08.530 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:08.788 [2024-07-26 11:30:04.196257] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization...
00:22:08.788 [2024-07-26 11:30:04.196411] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2151587 ]
00:22:08.788 EAL: No free 2048 kB hugepages reported on node 1
00:22:08.788 [2024-07-26 11:30:04.303028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:08.788 [2024-07-26 11:30:04.427221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:22:09.046 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:22:09.046 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0
00:22:09.046 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lD0dPRpfsy
00:22:09.612 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
00:22:09.870 [2024-07-26 11:30:05.498823] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:22:10.128 nvme0n1
00:22:10.128 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:10.386 Running I/O for 1 seconds...
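Stripped of the xtrace prefixes, the target-side TLS setup that setup_nvmf_tgt performs above reduces to the following RPC sequence. This is a minimal sketch assuming a running nvmf_tgt and the in-tree scripts/rpc.py; the PSK path /tmp/tmp.lD0dPRpfsy is a mktemp artifact of this particular run:

# Create the TCP transport, a subsystem backed by a malloc bdev,
# a TLS-enabled listener (-k), and a host entry keyed to a PSK file.
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lD0dPRpfsy

Note the deprecation warning above: passing a PSK *path* to nvmf_subsystem_add_host is flagged for removal in v24.09; the bdevperf side instead loads the same file through keyring_file_add_key and references it as key0. The results of the 1-second verify run follow.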
00:22:11.319
00:22:11.319 Latency(us)
00:22:11.319 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:11.319 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:22:11.319 Verification LBA range: start 0x0 length 0x2000
00:22:11.319 nvme0n1 : 1.05 2185.07 8.54 0.00 0.00 57718.70 6553.60 114955.00
00:22:11.319 ===================================================================================================================
00:22:11.319 Total : 2185.07 8.54 0.00 0.00 57718.70 6553.60 114955.00
00:22:11.319 0
00:22:11.319 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 2151587
00:22:11.319 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2151587 ']'
00:22:11.319 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2151587
00:22:11.319 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:22:11.319 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:22:11.319 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2151587
00:22:11.319 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:22:11.319 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:22:11.319 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2151587'
killing process with pid 2151587
11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2151587
Received shutdown signal, test time was about 1.000000 seconds
00:22:11.319
00:22:11.319 Latency(us)
00:22:11.319 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:11.319 ===================================================================================================================
00:22:11.319 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:11.319 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2151587
00:22:11.885 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 2151127
00:22:11.885 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2151127 ']'
00:22:11.885 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2151127
00:22:11.885 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:22:11.885 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:22:11.885 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2151127
00:22:11.885 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:22:11.885 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:22:11.885 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2151127'
killing process with pid 2151127
11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2151127
[2024-07-26 11:30:07.278046] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:22:11.885 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2151127
00:22:12.143 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart
00:22:12.143 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:22:12.143 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable
00:22:12.143 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:12.143 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2151998
00:22:12.143 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:22:12.143 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2151998
00:22:12.144 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2151998 ']'
00:22:12.144 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:12.144 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100
00:22:12.144 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:12.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:12.144 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable
00:22:12.144 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:12.144 [2024-07-26 11:30:07.649839] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization...
00:22:12.144 [2024-07-26 11:30:07.649953] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:12.144 EAL: No free 2048 kB hugepages reported on node 1
00:22:12.144 [2024-07-26 11:30:07.733358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:12.402 [2024-07-26 11:30:07.855338] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:12.402 [2024-07-26 11:30:07.855413] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:12.402 [2024-07-26 11:30:07.855450] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:12.402 [2024-07-26 11:30:07.855483] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:12.402 [2024-07-26 11:30:07.855508] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
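Condensed from the trace just completed, the initiator flow mirrors the target setup (again a sketch assuming the in-tree bdevperf, rpc.py and bdevperf.py; the paths and the nvme0 name come straight from this run):

# Start bdevperf idle (-z) so it can be configured over its own RPC socket,
# load the PSK into the keyring, attach the controller over TLS, then run I/O.
bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lD0dPRpfsy
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Reading the first table above: the run sustained roughly 2185 IOPS (8.54 MiB/s) over the TLS-encrypted queue pair with no failures or timeouts; the second, all-zero table is just the shutdown-time summary printed after the kill.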
00:22:12.402 [2024-07-26 11:30:07.855551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:22:12.402 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:22:12.660 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0
00:22:12.660 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:22:12.660 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable
00:22:12.660 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:12.660 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:12.660 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd
00:22:12.660 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:12.660 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:12.660 [2024-07-26 11:30:08.096989] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:12.660 malloc0
00:22:12.660 [2024-07-26 11:30:08.129664] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:22:12.660 [2024-07-26 11:30:08.140651] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:12.660 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:12.660 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=2152115
00:22:12.660 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1
00:22:12.660 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 2152115 /var/tmp/bdevperf.sock
00:22:12.660 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2152115 ']'
00:22:12.660 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:12.660 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100
00:22:12.660 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:12.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:12.660 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable
00:22:12.661 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:12.661 [2024-07-26 11:30:08.213307] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization...
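A recurring pattern in this log is waitforlisten: the test echoes the "Waiting for process..." line and then blocks until the freshly forked SPDK app (nvmf_tgt or bdevperf) answers on its UNIX-domain RPC socket. The real helper lives in common/autotest_common.sh and is never expanded in this trace; conceptually it behaves like the following hypothetical sketch (names and retry policy are illustrative, not the actual implementation):

# Poll until the app's RPC socket appears, or the process dies.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1  # app exited before listening
        [ -S "$rpc_addr" ] && return 0          # RPC socket is up
        sleep 0.1
    done
    return 1
}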
00:22:12.661 [2024-07-26 11:30:08.213387] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2152115 ] 00:22:12.661 EAL: No free 2048 kB hugepages reported on node 1 00:22:12.661 [2024-07-26 11:30:08.281319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.919 [2024-07-26 11:30:08.412042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:12.920 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:12.920 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:12.920 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lD0dPRpfsy 00:22:13.486 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:13.744 [2024-07-26 11:30:09.192977] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:13.744 nvme0n1 00:22:13.744 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:14.002 Running I/O for 1 seconds... 00:22:14.932 00:22:14.932 Latency(us) 00:22:14.932 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:14.932 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:14.932 Verification LBA range: start 0x0 length 0x2000 00:22:14.932 nvme0n1 : 1.05 2386.13 9.32 0.00 0.00 52496.10 6456.51 73011.96 00:22:14.932 =================================================================================================================== 00:22:14.932 Total : 2386.13 9.32 0.00 0.00 52496.10 6456.51 73011.96 00:22:14.932 0 00:22:14.932 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:22:14.932 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.932 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:15.190 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.190 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:22:15.190 "subsystems": [ 00:22:15.190 { 00:22:15.190 "subsystem": "keyring", 00:22:15.190 "config": [ 00:22:15.190 { 00:22:15.190 "method": "keyring_file_add_key", 00:22:15.190 "params": { 00:22:15.190 "name": "key0", 00:22:15.190 "path": "/tmp/tmp.lD0dPRpfsy" 00:22:15.190 } 00:22:15.190 } 00:22:15.190 ] 00:22:15.190 }, 00:22:15.190 { 00:22:15.190 "subsystem": "iobuf", 00:22:15.190 "config": [ 00:22:15.190 { 00:22:15.190 "method": "iobuf_set_options", 00:22:15.190 "params": { 00:22:15.190 "small_pool_count": 8192, 00:22:15.190 "large_pool_count": 1024, 00:22:15.190 "small_bufsize": 8192, 00:22:15.190 "large_bufsize": 135168 00:22:15.190 } 00:22:15.190 } 00:22:15.190 ] 00:22:15.190 }, 00:22:15.190 { 00:22:15.190 
"subsystem": "sock", 00:22:15.190 "config": [ 00:22:15.190 { 00:22:15.190 "method": "sock_set_default_impl", 00:22:15.190 "params": { 00:22:15.190 "impl_name": "posix" 00:22:15.190 } 00:22:15.190 }, 00:22:15.190 { 00:22:15.190 "method": "sock_impl_set_options", 00:22:15.190 "params": { 00:22:15.190 "impl_name": "ssl", 00:22:15.190 "recv_buf_size": 4096, 00:22:15.190 "send_buf_size": 4096, 00:22:15.190 "enable_recv_pipe": true, 00:22:15.190 "enable_quickack": false, 00:22:15.190 "enable_placement_id": 0, 00:22:15.190 "enable_zerocopy_send_server": true, 00:22:15.190 "enable_zerocopy_send_client": false, 00:22:15.190 "zerocopy_threshold": 0, 00:22:15.190 "tls_version": 0, 00:22:15.190 "enable_ktls": false 00:22:15.190 } 00:22:15.190 }, 00:22:15.190 { 00:22:15.190 "method": "sock_impl_set_options", 00:22:15.190 "params": { 00:22:15.190 "impl_name": "posix", 00:22:15.190 "recv_buf_size": 2097152, 00:22:15.190 "send_buf_size": 2097152, 00:22:15.190 "enable_recv_pipe": true, 00:22:15.190 "enable_quickack": false, 00:22:15.190 "enable_placement_id": 0, 00:22:15.190 "enable_zerocopy_send_server": true, 00:22:15.190 "enable_zerocopy_send_client": false, 00:22:15.190 "zerocopy_threshold": 0, 00:22:15.190 "tls_version": 0, 00:22:15.190 "enable_ktls": false 00:22:15.190 } 00:22:15.190 } 00:22:15.190 ] 00:22:15.190 }, 00:22:15.190 { 00:22:15.190 "subsystem": "vmd", 00:22:15.190 "config": [] 00:22:15.190 }, 00:22:15.190 { 00:22:15.190 "subsystem": "accel", 00:22:15.190 "config": [ 00:22:15.190 { 00:22:15.190 "method": "accel_set_options", 00:22:15.190 "params": { 00:22:15.190 "small_cache_size": 128, 00:22:15.190 "large_cache_size": 16, 00:22:15.190 "task_count": 2048, 00:22:15.190 "sequence_count": 2048, 00:22:15.190 "buf_count": 2048 00:22:15.190 } 00:22:15.190 } 00:22:15.190 ] 00:22:15.190 }, 00:22:15.190 { 00:22:15.190 "subsystem": "bdev", 00:22:15.190 "config": [ 00:22:15.190 { 00:22:15.190 "method": "bdev_set_options", 00:22:15.190 "params": { 00:22:15.190 "bdev_io_pool_size": 65535, 00:22:15.190 "bdev_io_cache_size": 256, 00:22:15.190 "bdev_auto_examine": true, 00:22:15.190 "iobuf_small_cache_size": 128, 00:22:15.190 "iobuf_large_cache_size": 16 00:22:15.190 } 00:22:15.190 }, 00:22:15.190 { 00:22:15.190 "method": "bdev_raid_set_options", 00:22:15.190 "params": { 00:22:15.190 "process_window_size_kb": 1024, 00:22:15.190 "process_max_bandwidth_mb_sec": 0 00:22:15.190 } 00:22:15.190 }, 00:22:15.190 { 00:22:15.190 "method": "bdev_iscsi_set_options", 00:22:15.190 "params": { 00:22:15.190 "timeout_sec": 30 00:22:15.190 } 00:22:15.190 }, 00:22:15.190 { 00:22:15.190 "method": "bdev_nvme_set_options", 00:22:15.190 "params": { 00:22:15.190 "action_on_timeout": "none", 00:22:15.190 "timeout_us": 0, 00:22:15.190 "timeout_admin_us": 0, 00:22:15.190 "keep_alive_timeout_ms": 10000, 00:22:15.190 "arbitration_burst": 0, 00:22:15.190 "low_priority_weight": 0, 00:22:15.190 "medium_priority_weight": 0, 00:22:15.190 "high_priority_weight": 0, 00:22:15.190 "nvme_adminq_poll_period_us": 10000, 00:22:15.190 "nvme_ioq_poll_period_us": 0, 00:22:15.190 "io_queue_requests": 0, 00:22:15.190 "delay_cmd_submit": true, 00:22:15.190 "transport_retry_count": 4, 00:22:15.190 "bdev_retry_count": 3, 00:22:15.190 "transport_ack_timeout": 0, 00:22:15.190 "ctrlr_loss_timeout_sec": 0, 00:22:15.190 "reconnect_delay_sec": 0, 00:22:15.190 "fast_io_fail_timeout_sec": 0, 00:22:15.190 "disable_auto_failback": false, 00:22:15.190 "generate_uuids": false, 00:22:15.190 "transport_tos": 0, 00:22:15.190 "nvme_error_stat": false, 00:22:15.190 
"rdma_srq_size": 0, 00:22:15.190 "io_path_stat": false, 00:22:15.190 "allow_accel_sequence": false, 00:22:15.190 "rdma_max_cq_size": 0, 00:22:15.190 "rdma_cm_event_timeout_ms": 0, 00:22:15.190 "dhchap_digests": [ 00:22:15.190 "sha256", 00:22:15.190 "sha384", 00:22:15.190 "sha512" 00:22:15.190 ], 00:22:15.190 "dhchap_dhgroups": [ 00:22:15.190 "null", 00:22:15.190 "ffdhe2048", 00:22:15.190 "ffdhe3072", 00:22:15.190 "ffdhe4096", 00:22:15.190 "ffdhe6144", 00:22:15.190 "ffdhe8192" 00:22:15.190 ] 00:22:15.190 } 00:22:15.190 }, 00:22:15.190 { 00:22:15.190 "method": "bdev_nvme_set_hotplug", 00:22:15.190 "params": { 00:22:15.190 "period_us": 100000, 00:22:15.190 "enable": false 00:22:15.190 } 00:22:15.190 }, 00:22:15.190 { 00:22:15.190 "method": "bdev_malloc_create", 00:22:15.190 "params": { 00:22:15.190 "name": "malloc0", 00:22:15.190 "num_blocks": 8192, 00:22:15.190 "block_size": 4096, 00:22:15.190 "physical_block_size": 4096, 00:22:15.190 "uuid": "acd8a535-6c6f-400c-891f-79fbafe20add", 00:22:15.190 "optimal_io_boundary": 0, 00:22:15.190 "md_size": 0, 00:22:15.190 "dif_type": 0, 00:22:15.190 "dif_is_head_of_md": false, 00:22:15.190 "dif_pi_format": 0 00:22:15.190 } 00:22:15.190 }, 00:22:15.190 { 00:22:15.190 "method": "bdev_wait_for_examine" 00:22:15.190 } 00:22:15.190 ] 00:22:15.190 }, 00:22:15.190 { 00:22:15.190 "subsystem": "nbd", 00:22:15.190 "config": [] 00:22:15.190 }, 00:22:15.190 { 00:22:15.190 "subsystem": "scheduler", 00:22:15.190 "config": [ 00:22:15.190 { 00:22:15.190 "method": "framework_set_scheduler", 00:22:15.190 "params": { 00:22:15.190 "name": "static" 00:22:15.190 } 00:22:15.190 } 00:22:15.190 ] 00:22:15.190 }, 00:22:15.190 { 00:22:15.190 "subsystem": "nvmf", 00:22:15.190 "config": [ 00:22:15.190 { 00:22:15.190 "method": "nvmf_set_config", 00:22:15.190 "params": { 00:22:15.190 "discovery_filter": "match_any", 00:22:15.190 "admin_cmd_passthru": { 00:22:15.190 "identify_ctrlr": false 00:22:15.190 } 00:22:15.190 } 00:22:15.190 }, 00:22:15.190 { 00:22:15.190 "method": "nvmf_set_max_subsystems", 00:22:15.190 "params": { 00:22:15.190 "max_subsystems": 1024 00:22:15.190 } 00:22:15.190 }, 00:22:15.190 { 00:22:15.190 "method": "nvmf_set_crdt", 00:22:15.190 "params": { 00:22:15.190 "crdt1": 0, 00:22:15.190 "crdt2": 0, 00:22:15.190 "crdt3": 0 00:22:15.190 } 00:22:15.190 }, 00:22:15.190 { 00:22:15.190 "method": "nvmf_create_transport", 00:22:15.190 "params": { 00:22:15.190 "trtype": "TCP", 00:22:15.190 "max_queue_depth": 128, 00:22:15.190 "max_io_qpairs_per_ctrlr": 127, 00:22:15.190 "in_capsule_data_size": 4096, 00:22:15.190 "max_io_size": 131072, 00:22:15.190 "io_unit_size": 131072, 00:22:15.190 "max_aq_depth": 128, 00:22:15.190 "num_shared_buffers": 511, 00:22:15.190 "buf_cache_size": 4294967295, 00:22:15.190 "dif_insert_or_strip": false, 00:22:15.190 "zcopy": false, 00:22:15.190 "c2h_success": false, 00:22:15.190 "sock_priority": 0, 00:22:15.190 "abort_timeout_sec": 1, 00:22:15.190 "ack_timeout": 0, 00:22:15.190 "data_wr_pool_size": 0 00:22:15.190 } 00:22:15.190 }, 00:22:15.190 { 00:22:15.190 "method": "nvmf_create_subsystem", 00:22:15.190 "params": { 00:22:15.190 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:15.190 "allow_any_host": false, 00:22:15.190 "serial_number": "00000000000000000000", 00:22:15.190 "model_number": "SPDK bdev Controller", 00:22:15.190 "max_namespaces": 32, 00:22:15.190 "min_cntlid": 1, 00:22:15.190 "max_cntlid": 65519, 00:22:15.190 "ana_reporting": false 00:22:15.190 } 00:22:15.190 }, 00:22:15.190 { 00:22:15.190 "method": "nvmf_subsystem_add_host", 00:22:15.190 
"params": { 00:22:15.190 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:15.190 "host": "nqn.2016-06.io.spdk:host1", 00:22:15.190 "psk": "key0" 00:22:15.190 } 00:22:15.190 }, 00:22:15.190 { 00:22:15.190 "method": "nvmf_subsystem_add_ns", 00:22:15.190 "params": { 00:22:15.190 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:15.190 "namespace": { 00:22:15.190 "nsid": 1, 00:22:15.190 "bdev_name": "malloc0", 00:22:15.190 "nguid": "ACD8A5356C6F400C891F79FBAFE20ADD", 00:22:15.190 "uuid": "acd8a535-6c6f-400c-891f-79fbafe20add", 00:22:15.190 "no_auto_visible": false 00:22:15.190 } 00:22:15.190 } 00:22:15.190 }, 00:22:15.190 { 00:22:15.190 "method": "nvmf_subsystem_add_listener", 00:22:15.190 "params": { 00:22:15.190 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:15.190 "listen_address": { 00:22:15.190 "trtype": "TCP", 00:22:15.190 "adrfam": "IPv4", 00:22:15.190 "traddr": "10.0.0.2", 00:22:15.190 "trsvcid": "4420" 00:22:15.190 }, 00:22:15.190 "secure_channel": false, 00:22:15.190 "sock_impl": "ssl" 00:22:15.190 } 00:22:15.190 } 00:22:15.190 ] 00:22:15.190 } 00:22:15.190 ] 00:22:15.190 }' 00:22:15.190 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:15.447 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:22:15.447 "subsystems": [ 00:22:15.447 { 00:22:15.447 "subsystem": "keyring", 00:22:15.447 "config": [ 00:22:15.447 { 00:22:15.447 "method": "keyring_file_add_key", 00:22:15.447 "params": { 00:22:15.447 "name": "key0", 00:22:15.447 "path": "/tmp/tmp.lD0dPRpfsy" 00:22:15.447 } 00:22:15.447 } 00:22:15.447 ] 00:22:15.447 }, 00:22:15.447 { 00:22:15.447 "subsystem": "iobuf", 00:22:15.447 "config": [ 00:22:15.447 { 00:22:15.447 "method": "iobuf_set_options", 00:22:15.447 "params": { 00:22:15.447 "small_pool_count": 8192, 00:22:15.447 "large_pool_count": 1024, 00:22:15.447 "small_bufsize": 8192, 00:22:15.447 "large_bufsize": 135168 00:22:15.447 } 00:22:15.447 } 00:22:15.447 ] 00:22:15.447 }, 00:22:15.447 { 00:22:15.447 "subsystem": "sock", 00:22:15.447 "config": [ 00:22:15.447 { 00:22:15.447 "method": "sock_set_default_impl", 00:22:15.447 "params": { 00:22:15.447 "impl_name": "posix" 00:22:15.447 } 00:22:15.447 }, 00:22:15.447 { 00:22:15.447 "method": "sock_impl_set_options", 00:22:15.447 "params": { 00:22:15.447 "impl_name": "ssl", 00:22:15.447 "recv_buf_size": 4096, 00:22:15.447 "send_buf_size": 4096, 00:22:15.447 "enable_recv_pipe": true, 00:22:15.447 "enable_quickack": false, 00:22:15.447 "enable_placement_id": 0, 00:22:15.447 "enable_zerocopy_send_server": true, 00:22:15.447 "enable_zerocopy_send_client": false, 00:22:15.447 "zerocopy_threshold": 0, 00:22:15.447 "tls_version": 0, 00:22:15.447 "enable_ktls": false 00:22:15.447 } 00:22:15.447 }, 00:22:15.447 { 00:22:15.447 "method": "sock_impl_set_options", 00:22:15.447 "params": { 00:22:15.447 "impl_name": "posix", 00:22:15.447 "recv_buf_size": 2097152, 00:22:15.447 "send_buf_size": 2097152, 00:22:15.447 "enable_recv_pipe": true, 00:22:15.447 "enable_quickack": false, 00:22:15.447 "enable_placement_id": 0, 00:22:15.447 "enable_zerocopy_send_server": true, 00:22:15.447 "enable_zerocopy_send_client": false, 00:22:15.447 "zerocopy_threshold": 0, 00:22:15.447 "tls_version": 0, 00:22:15.447 "enable_ktls": false 00:22:15.447 } 00:22:15.447 } 00:22:15.447 ] 00:22:15.447 }, 00:22:15.447 { 00:22:15.447 "subsystem": "vmd", 00:22:15.447 "config": [] 00:22:15.447 }, 00:22:15.447 { 00:22:15.447 "subsystem": 
"accel", 00:22:15.447 "config": [ 00:22:15.447 { 00:22:15.447 "method": "accel_set_options", 00:22:15.447 "params": { 00:22:15.447 "small_cache_size": 128, 00:22:15.447 "large_cache_size": 16, 00:22:15.447 "task_count": 2048, 00:22:15.447 "sequence_count": 2048, 00:22:15.447 "buf_count": 2048 00:22:15.447 } 00:22:15.447 } 00:22:15.447 ] 00:22:15.447 }, 00:22:15.447 { 00:22:15.448 "subsystem": "bdev", 00:22:15.448 "config": [ 00:22:15.448 { 00:22:15.448 "method": "bdev_set_options", 00:22:15.448 "params": { 00:22:15.448 "bdev_io_pool_size": 65535, 00:22:15.448 "bdev_io_cache_size": 256, 00:22:15.448 "bdev_auto_examine": true, 00:22:15.448 "iobuf_small_cache_size": 128, 00:22:15.448 "iobuf_large_cache_size": 16 00:22:15.448 } 00:22:15.448 }, 00:22:15.448 { 00:22:15.448 "method": "bdev_raid_set_options", 00:22:15.448 "params": { 00:22:15.448 "process_window_size_kb": 1024, 00:22:15.448 "process_max_bandwidth_mb_sec": 0 00:22:15.448 } 00:22:15.448 }, 00:22:15.448 { 00:22:15.448 "method": "bdev_iscsi_set_options", 00:22:15.448 "params": { 00:22:15.448 "timeout_sec": 30 00:22:15.448 } 00:22:15.448 }, 00:22:15.448 { 00:22:15.448 "method": "bdev_nvme_set_options", 00:22:15.448 "params": { 00:22:15.448 "action_on_timeout": "none", 00:22:15.448 "timeout_us": 0, 00:22:15.448 "timeout_admin_us": 0, 00:22:15.448 "keep_alive_timeout_ms": 10000, 00:22:15.448 "arbitration_burst": 0, 00:22:15.448 "low_priority_weight": 0, 00:22:15.448 "medium_priority_weight": 0, 00:22:15.448 "high_priority_weight": 0, 00:22:15.448 "nvme_adminq_poll_period_us": 10000, 00:22:15.448 "nvme_ioq_poll_period_us": 0, 00:22:15.448 "io_queue_requests": 512, 00:22:15.448 "delay_cmd_submit": true, 00:22:15.448 "transport_retry_count": 4, 00:22:15.448 "bdev_retry_count": 3, 00:22:15.448 "transport_ack_timeout": 0, 00:22:15.448 "ctrlr_loss_timeout_sec": 0, 00:22:15.448 "reconnect_delay_sec": 0, 00:22:15.448 "fast_io_fail_timeout_sec": 0, 00:22:15.448 "disable_auto_failback": false, 00:22:15.448 "generate_uuids": false, 00:22:15.448 "transport_tos": 0, 00:22:15.448 "nvme_error_stat": false, 00:22:15.448 "rdma_srq_size": 0, 00:22:15.448 "io_path_stat": false, 00:22:15.448 "allow_accel_sequence": false, 00:22:15.448 "rdma_max_cq_size": 0, 00:22:15.448 "rdma_cm_event_timeout_ms": 0, 00:22:15.448 "dhchap_digests": [ 00:22:15.448 "sha256", 00:22:15.448 "sha384", 00:22:15.448 "sha512" 00:22:15.448 ], 00:22:15.448 "dhchap_dhgroups": [ 00:22:15.448 "null", 00:22:15.448 "ffdhe2048", 00:22:15.448 "ffdhe3072", 00:22:15.448 "ffdhe4096", 00:22:15.448 "ffdhe6144", 00:22:15.448 "ffdhe8192" 00:22:15.448 ] 00:22:15.448 } 00:22:15.448 }, 00:22:15.448 { 00:22:15.448 "method": "bdev_nvme_attach_controller", 00:22:15.448 "params": { 00:22:15.448 "name": "nvme0", 00:22:15.448 "trtype": "TCP", 00:22:15.448 "adrfam": "IPv4", 00:22:15.448 "traddr": "10.0.0.2", 00:22:15.448 "trsvcid": "4420", 00:22:15.448 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:15.448 "prchk_reftag": false, 00:22:15.448 "prchk_guard": false, 00:22:15.448 "ctrlr_loss_timeout_sec": 0, 00:22:15.448 "reconnect_delay_sec": 0, 00:22:15.448 "fast_io_fail_timeout_sec": 0, 00:22:15.448 "psk": "key0", 00:22:15.448 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:15.448 "hdgst": false, 00:22:15.448 "ddgst": false 00:22:15.448 } 00:22:15.448 }, 00:22:15.448 { 00:22:15.448 "method": "bdev_nvme_set_hotplug", 00:22:15.448 "params": { 00:22:15.448 "period_us": 100000, 00:22:15.448 "enable": false 00:22:15.448 } 00:22:15.448 }, 00:22:15.448 { 00:22:15.448 "method": "bdev_enable_histogram", 00:22:15.448 
"params": { 00:22:15.448 "name": "nvme0n1", 00:22:15.448 "enable": true 00:22:15.448 } 00:22:15.448 }, 00:22:15.448 { 00:22:15.448 "method": "bdev_wait_for_examine" 00:22:15.448 } 00:22:15.448 ] 00:22:15.448 }, 00:22:15.448 { 00:22:15.448 "subsystem": "nbd", 00:22:15.448 "config": [] 00:22:15.448 } 00:22:15.448 ] 00:22:15.448 }' 00:22:15.448 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 2152115 00:22:15.448 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2152115 ']' 00:22:15.448 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2152115 00:22:15.448 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:15.448 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:15.448 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2152115 00:22:15.448 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:15.448 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:15.448 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2152115' 00:22:15.448 killing process with pid 2152115 00:22:15.448 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2152115 00:22:15.448 Received shutdown signal, test time was about 1.000000 seconds 00:22:15.448 00:22:15.448 Latency(us) 00:22:15.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:15.448 =================================================================================================================== 00:22:15.448 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:15.448 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2152115 00:22:16.012 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 2151998 00:22:16.012 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2151998 ']' 00:22:16.012 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2151998 00:22:16.012 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:16.012 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:16.012 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2151998 00:22:16.012 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:16.012 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:16.012 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2151998' 00:22:16.012 killing process with pid 2151998 00:22:16.012 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2151998 00:22:16.012 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2151998 00:22:16.270 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:22:16.270 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # 
timing_enter start_nvmf_tgt 00:22:16.270 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:22:16.270 "subsystems": [ 00:22:16.270 { 00:22:16.270 "subsystem": "keyring", 00:22:16.270 "config": [ 00:22:16.270 { 00:22:16.270 "method": "keyring_file_add_key", 00:22:16.270 "params": { 00:22:16.270 "name": "key0", 00:22:16.270 "path": "/tmp/tmp.lD0dPRpfsy" 00:22:16.270 } 00:22:16.270 } 00:22:16.270 ] 00:22:16.270 }, 00:22:16.270 { 00:22:16.270 "subsystem": "iobuf", 00:22:16.270 "config": [ 00:22:16.270 { 00:22:16.270 "method": "iobuf_set_options", 00:22:16.270 "params": { 00:22:16.270 "small_pool_count": 8192, 00:22:16.270 "large_pool_count": 1024, 00:22:16.270 "small_bufsize": 8192, 00:22:16.270 "large_bufsize": 135168 00:22:16.270 } 00:22:16.270 } 00:22:16.270 ] 00:22:16.270 }, 00:22:16.270 { 00:22:16.270 "subsystem": "sock", 00:22:16.270 "config": [ 00:22:16.270 { 00:22:16.270 "method": "sock_set_default_impl", 00:22:16.270 "params": { 00:22:16.270 "impl_name": "posix" 00:22:16.270 } 00:22:16.270 }, 00:22:16.270 { 00:22:16.270 "method": "sock_impl_set_options", 00:22:16.270 "params": { 00:22:16.270 "impl_name": "ssl", 00:22:16.270 "recv_buf_size": 4096, 00:22:16.270 "send_buf_size": 4096, 00:22:16.270 "enable_recv_pipe": true, 00:22:16.270 "enable_quickack": false, 00:22:16.270 "enable_placement_id": 0, 00:22:16.270 "enable_zerocopy_send_server": true, 00:22:16.270 "enable_zerocopy_send_client": false, 00:22:16.270 "zerocopy_threshold": 0, 00:22:16.270 "tls_version": 0, 00:22:16.270 "enable_ktls": false 00:22:16.270 } 00:22:16.270 }, 00:22:16.270 { 00:22:16.270 "method": "sock_impl_set_options", 00:22:16.270 "params": { 00:22:16.270 "impl_name": "posix", 00:22:16.270 "recv_buf_size": 2097152, 00:22:16.270 "send_buf_size": 2097152, 00:22:16.270 "enable_recv_pipe": true, 00:22:16.270 "enable_quickack": false, 00:22:16.270 "enable_placement_id": 0, 00:22:16.270 "enable_zerocopy_send_server": true, 00:22:16.270 "enable_zerocopy_send_client": false, 00:22:16.270 "zerocopy_threshold": 0, 00:22:16.270 "tls_version": 0, 00:22:16.271 "enable_ktls": false 00:22:16.271 } 00:22:16.271 } 00:22:16.271 ] 00:22:16.271 }, 00:22:16.271 { 00:22:16.271 "subsystem": "vmd", 00:22:16.271 "config": [] 00:22:16.271 }, 00:22:16.271 { 00:22:16.271 "subsystem": "accel", 00:22:16.271 "config": [ 00:22:16.271 { 00:22:16.271 "method": "accel_set_options", 00:22:16.271 "params": { 00:22:16.271 "small_cache_size": 128, 00:22:16.271 "large_cache_size": 16, 00:22:16.271 "task_count": 2048, 00:22:16.271 "sequence_count": 2048, 00:22:16.271 "buf_count": 2048 00:22:16.271 } 00:22:16.271 } 00:22:16.271 ] 00:22:16.271 }, 00:22:16.271 { 00:22:16.271 "subsystem": "bdev", 00:22:16.271 "config": [ 00:22:16.271 { 00:22:16.271 "method": "bdev_set_options", 00:22:16.271 "params": { 00:22:16.271 "bdev_io_pool_size": 65535, 00:22:16.271 "bdev_io_cache_size": 256, 00:22:16.271 "bdev_auto_examine": true, 00:22:16.271 "iobuf_small_cache_size": 128, 00:22:16.271 "iobuf_large_cache_size": 16 00:22:16.271 } 00:22:16.271 }, 00:22:16.271 { 00:22:16.271 "method": "bdev_raid_set_options", 00:22:16.271 "params": { 00:22:16.271 "process_window_size_kb": 1024, 00:22:16.271 "process_max_bandwidth_mb_sec": 0 00:22:16.271 } 00:22:16.271 }, 00:22:16.271 { 00:22:16.271 "method": "bdev_iscsi_set_options", 00:22:16.271 "params": { 00:22:16.271 "timeout_sec": 30 00:22:16.271 } 00:22:16.271 }, 00:22:16.271 { 00:22:16.271 "method": "bdev_nvme_set_options", 00:22:16.271 "params": { 00:22:16.271 "action_on_timeout": "none", 
00:22:16.271 "timeout_us": 0, 00:22:16.271 "timeout_admin_us": 0, 00:22:16.271 "keep_alive_timeout_ms": 10000, 00:22:16.271 "arbitration_burst": 0, 00:22:16.271 "low_priority_weight": 0, 00:22:16.271 "medium_priority_weight": 0, 00:22:16.271 "high_priority_weight": 0, 00:22:16.271 "nvme_adminq_poll_period_us": 10000, 00:22:16.271 "nvme_ioq_poll_period_us": 0, 00:22:16.271 "io_queue_requests": 0, 00:22:16.271 "delay_cmd_submit": true, 00:22:16.271 "transport_retry_count": 4, 00:22:16.271 "bdev_retry_count": 3, 00:22:16.271 "transport_ack_timeout": 0, 00:22:16.271 "ctrlr_loss_timeout_sec": 0, 00:22:16.271 "reconnect_delay_sec": 0, 00:22:16.271 "fast_io_fail_timeout_sec": 0, 00:22:16.271 "disable_auto_failback": false, 00:22:16.271 "generate_uuids": false, 00:22:16.271 "transport_tos": 0, 00:22:16.271 "nvme_error_stat": false, 00:22:16.271 "rdma_srq_size": 0, 00:22:16.271 "io_path_stat": false, 00:22:16.271 "allow_accel_sequence": false, 00:22:16.271 "rdma_max_cq_size": 0, 00:22:16.271 "rdma_cm_event_timeout_ms": 0, 00:22:16.271 "dhchap_digests": [ 00:22:16.271 "sha256", 00:22:16.271 "sha384", 00:22:16.271 "sha512" 00:22:16.271 ], 00:22:16.271 "dhchap_dhgroups": [ 00:22:16.271 "null", 00:22:16.271 "ffdhe2048", 00:22:16.271 "ffdhe3072", 00:22:16.271 "ffdhe4096", 00:22:16.271 "ffdhe6144", 00:22:16.271 "ffdhe8192" 00:22:16.271 ] 00:22:16.271 } 00:22:16.271 }, 00:22:16.271 { 00:22:16.271 "method": "bdev_nvme_set_hotplug", 00:22:16.271 "params": { 00:22:16.271 "period_us": 100000, 00:22:16.271 "enable": false 00:22:16.271 } 00:22:16.271 }, 00:22:16.271 { 00:22:16.271 "method": "bdev_malloc_create", 00:22:16.271 "params": { 00:22:16.271 "name": "malloc0", 00:22:16.271 "num_blocks": 8192, 00:22:16.271 "block_size": 4096, 00:22:16.271 "physical_block_size": 4096, 00:22:16.271 "uuid": "acd8a535-6c6f-400c-891f-79fbafe20add", 00:22:16.271 "optimal_io_boundary": 0, 00:22:16.271 "md_size": 0, 00:22:16.271 "dif_type": 0, 00:22:16.271 "dif_is_head_of_md": false, 00:22:16.271 "dif_pi_format": 0 00:22:16.271 } 00:22:16.271 }, 00:22:16.271 { 00:22:16.271 "method": "bdev_wait_for_examine" 00:22:16.271 } 00:22:16.271 ] 00:22:16.271 }, 00:22:16.271 { 00:22:16.271 "subsystem": "nbd", 00:22:16.271 "config": [] 00:22:16.271 }, 00:22:16.271 { 00:22:16.271 "subsystem": "scheduler", 00:22:16.271 "config": [ 00:22:16.271 { 00:22:16.271 "method": "framework_set_scheduler", 00:22:16.271 "params": { 00:22:16.271 "name": "static" 00:22:16.271 } 00:22:16.271 } 00:22:16.271 ] 00:22:16.271 }, 00:22:16.271 { 00:22:16.271 "subsystem": "nvmf", 00:22:16.271 "config": [ 00:22:16.271 { 00:22:16.271 "method": "nvmf_set_config", 00:22:16.271 "params": { 00:22:16.271 "discovery_filter": "match_any", 00:22:16.271 "admin_cmd_passthru": { 00:22:16.271 "identify_ctrlr": false 00:22:16.271 } 00:22:16.271 } 00:22:16.271 }, 00:22:16.271 { 00:22:16.271 "method": "nvmf_set_max_subsystems", 00:22:16.271 "params": { 00:22:16.271 "max_subsystems": 1024 00:22:16.271 } 00:22:16.271 }, 00:22:16.271 { 00:22:16.271 "method": "nvmf_set_crdt", 00:22:16.271 "params": { 00:22:16.271 "crdt1": 0, 00:22:16.271 "crdt2": 0, 00:22:16.271 "crdt3": 0 00:22:16.271 } 00:22:16.271 }, 00:22:16.271 { 00:22:16.271 "method": "nvmf_create_transport", 00:22:16.271 "params": { 00:22:16.271 "trtype": "TCP", 00:22:16.271 "max_queue_depth": 128, 00:22:16.271 "max_io_qpairs_per_ctrlr": 127, 00:22:16.271 "in_capsule_data_size": 4096, 00:22:16.271 "max_io_size": 131072, 00:22:16.271 "io_unit_size": 131072, 00:22:16.271 "max_aq_depth": 128, 00:22:16.271 "num_shared_buffers": 511, 
00:22:16.271 "buf_cache_size": 4294967295, 00:22:16.271 "dif_insert_or_strip": false, 00:22:16.271 "zcopy": false, 00:22:16.271 "c2h_success": false, 00:22:16.271 "sock_priority": 0, 00:22:16.271 "abort_timeout_sec": 1, 00:22:16.271 "ack_timeout": 0, 00:22:16.271 "data_wr_pool_size": 0 00:22:16.271 } 00:22:16.271 }, 00:22:16.271 { 00:22:16.271 "method": "nvmf_create_subsystem", 00:22:16.271 "params": { 00:22:16.271 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:16.271 "allow_any_host": false, 00:22:16.271 "serial_number": "00000000000000000000", 00:22:16.271 "model_number": "SPDK bdev Controller", 00:22:16.271 "max_namespaces": 32, 00:22:16.271 "min_cntlid": 1, 00:22:16.271 "max_cntlid": 65519, 00:22:16.271 "ana_reporting": false 00:22:16.271 } 00:22:16.271 }, 00:22:16.271 { 00:22:16.271 "method": "nvmf_subsystem_add_host", 00:22:16.271 "params": { 00:22:16.271 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:16.271 "host": "nqn.2016-06.io.spdk:host1", 00:22:16.271 "psk": "key0" 00:22:16.271 } 00:22:16.271 }, 00:22:16.271 { 00:22:16.271 "method": "nvmf_subsystem_add_ns", 00:22:16.271 "params": { 00:22:16.271 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:16.271 "namespace": { 00:22:16.271 "nsid": 1, 00:22:16.271 "bdev_name": "malloc0", 00:22:16.271 "nguid": "ACD8A5356C6F400C891F79FBAFE20ADD", 00:22:16.271 "uuid": "acd8a535-6c6f-400c-891f-79fbafe20add", 00:22:16.271 "no_auto_visible": false 00:22:16.271 } 00:22:16.271 } 00:22:16.271 }, 00:22:16.271 { 00:22:16.271 "method": "nvmf_subsystem_add_listener", 00:22:16.271 "params": { 00:22:16.271 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:16.271 "listen_address": { 00:22:16.271 "trtype": "TCP", 00:22:16.271 "adrfam": "IPv4", 00:22:16.271 "traddr": "10.0.0.2", 00:22:16.271 "trsvcid": "4420" 00:22:16.271 }, 00:22:16.271 "secure_channel": false, 00:22:16.271 "sock_impl": "ssl" 00:22:16.271 } 00:22:16.271 } 00:22:16.271 ] 00:22:16.271 } 00:22:16.271 ] 00:22:16.271 }' 00:22:16.271 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:16.271 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:16.271 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2152941 00:22:16.271 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:16.271 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2152941 00:22:16.271 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2152941 ']' 00:22:16.271 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.271 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:16.271 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.271 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:16.271 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:16.271 [2024-07-26 11:30:11.804745] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
00:22:16.271 [2024-07-26 11:30:11.804846] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:16.272 EAL: No free 2048 kB hugepages reported on node 1 00:22:16.272 [2024-07-26 11:30:11.880085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.533 [2024-07-26 11:30:12.004991] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:16.533 [2024-07-26 11:30:12.005058] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:16.533 [2024-07-26 11:30:12.005085] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:16.533 [2024-07-26 11:30:12.005106] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:16.533 [2024-07-26 11:30:12.005125] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:16.533 [2024-07-26 11:30:12.005231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.791 [2024-07-26 11:30:12.259470] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:16.791 [2024-07-26 11:30:12.310201] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:16.791 [2024-07-26 11:30:12.310502] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:17.723 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:17.723 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:17.723 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:17.723 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:17.723 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:17.723 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:17.723 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=2153194 00:22:17.723 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 2153194 /var/tmp/bdevperf.sock 00:22:17.723 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2153194 ']' 00:22:17.723 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:17.723 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:17.723 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:17.723 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:17.724 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:22:17.724 "subsystems": [ 00:22:17.724 { 00:22:17.724 "subsystem": "keyring", 00:22:17.724 "config": [ 00:22:17.724 { 00:22:17.724 "method": "keyring_file_add_key", 00:22:17.724 "params": { 00:22:17.724 "name": "key0", 00:22:17.724 "path": "/tmp/tmp.lD0dPRpfsy" 00:22:17.724 } 00:22:17.724 } 00:22:17.724 ] 00:22:17.724 }, 00:22:17.724 { 00:22:17.724 "subsystem": "iobuf", 00:22:17.724 "config": [ 00:22:17.724 { 00:22:17.724 "method": "iobuf_set_options", 00:22:17.724 "params": { 00:22:17.724 "small_pool_count": 8192, 00:22:17.724 "large_pool_count": 1024, 00:22:17.724 "small_bufsize": 8192, 00:22:17.724 "large_bufsize": 135168 00:22:17.724 } 00:22:17.724 } 00:22:17.724 ] 00:22:17.724 }, 00:22:17.724 { 00:22:17.724 "subsystem": "sock", 00:22:17.724 "config": [ 00:22:17.724 { 00:22:17.724 "method": "sock_set_default_impl", 00:22:17.724 "params": { 00:22:17.724 "impl_name": "posix" 00:22:17.724 } 00:22:17.724 }, 00:22:17.724 { 00:22:17.724 "method": "sock_impl_set_options", 00:22:17.724 "params": { 00:22:17.724 "impl_name": "ssl", 00:22:17.724 "recv_buf_size": 4096, 00:22:17.724 "send_buf_size": 4096, 00:22:17.724 "enable_recv_pipe": true, 00:22:17.724 "enable_quickack": false, 00:22:17.724 "enable_placement_id": 0, 00:22:17.724 "enable_zerocopy_send_server": true, 00:22:17.724 "enable_zerocopy_send_client": false, 00:22:17.724 "zerocopy_threshold": 0, 00:22:17.724 "tls_version": 0, 00:22:17.724 "enable_ktls": false 00:22:17.724 } 00:22:17.724 }, 00:22:17.724 { 00:22:17.724 "method": "sock_impl_set_options", 00:22:17.724 "params": { 00:22:17.724 "impl_name": "posix", 00:22:17.724 "recv_buf_size": 2097152, 00:22:17.724 "send_buf_size": 2097152, 00:22:17.724 "enable_recv_pipe": true, 00:22:17.724 "enable_quickack": false, 00:22:17.724 "enable_placement_id": 0, 00:22:17.724 "enable_zerocopy_send_server": true, 00:22:17.724 "enable_zerocopy_send_client": false, 00:22:17.724 "zerocopy_threshold": 0, 00:22:17.724 "tls_version": 0, 00:22:17.724 "enable_ktls": false 00:22:17.724 } 00:22:17.724 } 00:22:17.724 ] 00:22:17.724 }, 00:22:17.724 { 00:22:17.724 "subsystem": "vmd", 00:22:17.724 "config": [] 00:22:17.724 }, 00:22:17.724 { 00:22:17.724 "subsystem": "accel", 00:22:17.724 "config": [ 00:22:17.724 { 00:22:17.724 "method": "accel_set_options", 00:22:17.724 "params": { 00:22:17.724 "small_cache_size": 128, 00:22:17.724 "large_cache_size": 16, 00:22:17.724 "task_count": 2048, 00:22:17.724 "sequence_count": 2048, 00:22:17.724 "buf_count": 2048 00:22:17.724 } 00:22:17.724 } 00:22:17.724 ] 00:22:17.724 }, 00:22:17.724 { 00:22:17.724 "subsystem": "bdev", 00:22:17.724 "config": [ 00:22:17.724 { 00:22:17.724 "method": "bdev_set_options", 00:22:17.724 "params": { 00:22:17.724 "bdev_io_pool_size": 65535, 00:22:17.724 "bdev_io_cache_size": 256, 00:22:17.724 "bdev_auto_examine": true, 00:22:17.724 "iobuf_small_cache_size": 128, 00:22:17.724 "iobuf_large_cache_size": 16 00:22:17.724 } 00:22:17.724 }, 00:22:17.724 { 00:22:17.724 "method": "bdev_raid_set_options", 00:22:17.724 "params": { 00:22:17.724 "process_window_size_kb": 1024, 00:22:17.724 "process_max_bandwidth_mb_sec": 0 00:22:17.724 } 00:22:17.724 }, 00:22:17.724 { 00:22:17.724 "method": "bdev_iscsi_set_options", 00:22:17.724 "params": { 00:22:17.724 "timeout_sec": 30 00:22:17.724 } 00:22:17.724 }, 00:22:17.724 { 00:22:17.724 "method": "bdev_nvme_set_options", 00:22:17.724 "params": { 00:22:17.724 "action_on_timeout": "none", 00:22:17.724 "timeout_us": 0, 
00:22:17.724 "timeout_admin_us": 0, 00:22:17.724 "keep_alive_timeout_ms": 10000, 00:22:17.724 "arbitration_burst": 0, 00:22:17.724 "low_priority_weight": 0, 00:22:17.724 "medium_priority_weight": 0, 00:22:17.724 "high_priority_weight": 0, 00:22:17.724 "nvme_adminq_poll_period_us": 10000, 00:22:17.724 "nvme_ioq_poll_period_us": 0, 00:22:17.724 "io_queue_requests": 512, 00:22:17.724 "delay_cmd_submit": true, 00:22:17.724 "transport_retry_count": 4, 00:22:17.724 "bdev_retry_count": 3, 00:22:17.724 "transport_ack_timeout": 0, 00:22:17.724 "ctrlr_loss_timeout_sec": 0, 00:22:17.724 "reconnect_delay_sec": 0, 00:22:17.724 "fast_io_fail_timeout_sec": 0, 00:22:17.724 "disable_auto_failback": false, 00:22:17.724 "generate_uuids": false, 00:22:17.724 "transport_tos": 0, 00:22:17.724 "nvme_error_stat": false, 00:22:17.724 "rdma_srq_size": 0, 00:22:17.725 "io_path_stat": false, 00:22:17.725 "allow_accel_sequence": false, 00:22:17.725 "rdma_max_cq_size": 0, 00:22:17.725 "rdma_cm_event_timeout_ms": 0, 00:22:17.725 "dhchap_digests": [ 00:22:17.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:17.725 "sha256", 00:22:17.725 "sha384", 00:22:17.725 "sha512" 00:22:17.725 ], 00:22:17.725 "dhchap_dhgroups": [ 00:22:17.725 "null", 00:22:17.725 "ffdhe2048", 00:22:17.725 "ffdhe3072", 00:22:17.725 "ffdhe4096", 00:22:17.725 "ffdhe6144", 00:22:17.725 "ffdhe8192" 00:22:17.725 ] 00:22:17.725 } 00:22:17.725 }, 00:22:17.725 { 00:22:17.725 "method": "bdev_nvme_attach_controller", 00:22:17.725 "params": { 00:22:17.725 "name": "nvme0", 00:22:17.725 "trtype": "TCP", 00:22:17.725 "adrfam": "IPv4", 00:22:17.725 "traddr": "10.0.0.2", 00:22:17.725 "trsvcid": "4420", 00:22:17.725 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:17.725 "prchk_reftag": false, 00:22:17.725 "prchk_guard": false, 00:22:17.725 "ctrlr_loss_timeout_sec": 0, 00:22:17.725 "reconnect_delay_sec": 0, 00:22:17.725 "fast_io_fail_timeout_sec": 0, 00:22:17.725 "psk": "key0", 00:22:17.725 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:17.725 "hdgst": false, 00:22:17.725 "ddgst": false 00:22:17.725 } 00:22:17.725 }, 00:22:17.725 { 00:22:17.725 "method": "bdev_nvme_set_hotplug", 00:22:17.725 "params": { 00:22:17.725 "period_us": 100000, 00:22:17.725 "enable": false 00:22:17.725 } 00:22:17.725 }, 00:22:17.725 { 00:22:17.725 "method": "bdev_enable_histogram", 00:22:17.725 "params": { 00:22:17.725 "name": "nvme0n1", 00:22:17.725 "enable": true 00:22:17.725 } 00:22:17.725 }, 00:22:17.725 { 00:22:17.725 "method": "bdev_wait_for_examine" 00:22:17.725 } 00:22:17.725 ] 00:22:17.725 }, 00:22:17.725 { 00:22:17.725 "subsystem": "nbd", 00:22:17.725 "config": [] 00:22:17.725 } 00:22:17.725 ] 00:22:17.725 }' 00:22:17.725 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:17.725 11:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:17.725 [2024-07-26 11:30:13.137973] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
00:22:17.725 [2024-07-26 11:30:13.138066] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2153194 ] 00:22:17.725 EAL: No free 2048 kB hugepages reported on node 1 00:22:17.725 [2024-07-26 11:30:13.205992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.725 [2024-07-26 11:30:13.326900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.982 [2024-07-26 11:30:13.513463] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:18.913 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:18.913 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:18.913 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:18.913 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:22:19.169 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.169 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:19.426 Running I/O for 1 seconds... 00:22:20.799 00:22:20.799 Latency(us) 00:22:20.799 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:20.799 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:20.799 Verification LBA range: start 0x0 length 0x2000 00:22:20.799 nvme0n1 : 1.04 2260.98 8.83 0.00 0.00 55531.51 11165.39 118061.89 00:22:20.799 =================================================================================================================== 00:22:20.799 Total : 2260.98 8.83 0.00 0.00 55531.51 11165.39 118061.89 00:22:20.799 0 00:22:20.799 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:22:20.799 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:22:20.799 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:20.799 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:22:20.799 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:22:20.799 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:22:20.799 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:20.799 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:22:20.799 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:22:20.799 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:22:20.799 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:20.799 nvmf_trace.0 00:22:20.799 11:30:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:22:20.799 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2153194 00:22:20.799 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2153194 ']' 00:22:20.799 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2153194 00:22:20.799 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:20.799 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:20.799 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2153194 00:22:20.799 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:20.799 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:20.799 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2153194' 00:22:20.799 killing process with pid 2153194 00:22:20.799 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2153194 00:22:20.799 Received shutdown signal, test time was about 1.000000 seconds 00:22:20.799 00:22:20.799 Latency(us) 00:22:20.799 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:20.799 =================================================================================================================== 00:22:20.799 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:20.799 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2153194 00:22:20.799 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:20.799 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:20.799 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:22:20.799 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:20.799 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:22:20.799 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:20.799 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:20.799 rmmod nvme_tcp 00:22:21.060 rmmod nvme_fabrics 00:22:21.060 rmmod nvme_keyring 00:22:21.060 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:21.060 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:22:21.060 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:22:21.060 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2152941 ']' 00:22:21.060 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2152941 00:22:21.060 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2152941 ']' 00:22:21.060 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2152941 00:22:21.060 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:21.060 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:21.060 11:30:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2152941 00:22:21.060 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:21.061 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:21.061 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2152941' 00:22:21.061 killing process with pid 2152941 00:22:21.061 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2152941 00:22:21.061 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2152941 00:22:21.377 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:21.377 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:21.377 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:21.377 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:21.377 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:21.377 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.377 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:21.377 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.282 11:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:23.282 11:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.rXsmxjtCAA /tmp/tmp.Pz1HTDHR4X /tmp/tmp.lD0dPRpfsy 00:22:23.282 00:22:23.282 real 1m32.339s 00:22:23.282 user 2m33.264s 00:22:23.282 sys 0m30.564s 00:22:23.282 11:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:23.282 11:30:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:23.282 ************************************ 00:22:23.282 END TEST nvmf_tls 00:22:23.282 ************************************ 00:22:23.543 11:30:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:23.543 11:30:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:23.543 11:30:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:23.543 11:30:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:23.543 ************************************ 00:22:23.543 START TEST nvmf_fips 00:22:23.543 ************************************ 00:22:23.543 11:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:23.543 * Looking for test storage... 
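The cleanup that ended nvmf_tls above (killprocess on the bdevperf and nvmf_tgt pids, then module unload and namespace teardown) recurs at the end of every suite in this log. The helper traced there always does the same three things: probe the pid with kill -0, read the command name with ps so a reactor is never mistaken for sudo, then kill and reap. A condensed sketch, paraphrased from the xtrace output rather than quoted from autotest_common.sh:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 0                           # already gone, nothing to do
        local name; name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_1
        [ "$name" != sudo ] && echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                           # wait reaps it and propagates status
    }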
00:22:23.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- 
# awk '{print $2}' 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:22:23.543 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:22:23.544 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:22:23.804 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:22:23.804 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:22:23.804 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:23.804 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:22:23.804 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:22:23.804 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:22:23.804 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:23.804 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:22:23.804 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:23.804 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:22:23.804 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:23.804 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:22:23.804 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:23.804 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:22:23.804 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:22:23.804 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:22:23.804 Error setting digest 00:22:23.804 00527408667F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:22:23.804 00527408667F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:22:23.804 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:22:23.804 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:23.804 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:23.804 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:23.804 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:22:23.804 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:23.804 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:23.804 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:23.804 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:23.804 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:23.804 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.804 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:23.804 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.804 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:23.804 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:23.804 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:22:23.804 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # 
local -ga e810 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:26.341 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 
00:22:26.341 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:26.341 Found net devices under 0000:84:00.0: cvl_0_0 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:26.341 Found net devices under 0000:84:00.1: cvl_0_1 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:26.341 
11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:26.341 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:26.342 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:26.342 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:26.342 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:26.342 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:26.342 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:26.342 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:26.342 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:26.342 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:26.342 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:26.342 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:26.342 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:26.342 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:26.342 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:26.342 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:26.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:26.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:22:26.342 00:22:26.342 --- 10.0.0.2 ping statistics --- 00:22:26.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.342 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:22:26.342 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:26.342 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:26.342 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:22:26.342 00:22:26.342 --- 10.0.0.1 ping statistics --- 00:22:26.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.342 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:22:26.342 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:26.342 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:22:26.342 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:26.342 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:26.342 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:26.342 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:26.342 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:26.342 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:26.342 11:30:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:26.601 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:22:26.601 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:26.602 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:26.602 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:26.602 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:26.602 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2155608 00:22:26.602 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2155608 00:22:26.602 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 2155608 ']' 00:22:26.602 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.602 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:26.602 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:26.602 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:26.602 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:26.602 [2024-07-26 11:30:22.128855] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
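The nvmf_tgt now starting is launched under ip netns exec inside the namespace wired up just above. Stripped of the xtrace noise, that bring-up is only a handful of ip commands (shown for the cvl_0_0/cvl_0_1 pair this rig detected; interface names vary per machine):

    ip netns add cvl_0_0_ns_spdk                       # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # first port moves into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                 # sanity-check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1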
00:22:26.602 [2024-07-26 11:30:22.128942] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:26.602 EAL: No free 2048 kB hugepages reported on node 1 00:22:26.602 [2024-07-26 11:30:22.218783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.860 [2024-07-26 11:30:22.369163] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:26.860 [2024-07-26 11:30:22.369236] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:26.860 [2024-07-26 11:30:22.369258] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:26.860 [2024-07-26 11:30:22.369277] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:26.860 [2024-07-26 11:30:22.369292] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:26.860 [2024-07-26 11:30:22.369338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:26.860 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:26.860 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:22:26.860 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:26.860 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:26.860 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:27.119 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:27.119 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:22:27.119 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:27.119 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:27.119 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:27.119 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:27.119 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:27.119 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:27.119 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:27.377 [2024-07-26 11:30:22.864601] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:27.377 [2024-07-26 11:30:22.880559] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:27.378 [2024-07-26 11:30:22.880827] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:27.378 
[2024-07-26 11:30:22.915217] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:27.378 malloc0 00:22:27.378 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:27.378 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2155752 00:22:27.378 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:27.378 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2155752 /var/tmp/bdevperf.sock 00:22:27.378 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 2155752 ']' 00:22:27.378 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:27.378 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:27.378 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:27.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:27.378 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:27.378 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:27.378 [2024-07-26 11:30:23.033679] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:22:27.378 [2024-07-26 11:30:23.033795] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2155752 ] 00:22:27.636 EAL: No free 2048 kB hugepages reported on node 1 00:22:27.637 [2024-07-26 11:30:23.114193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.637 [2024-07-26 11:30:23.254143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:27.900 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:27.900 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:22:27.900 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:28.165 [2024-07-26 11:30:23.712043] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:28.165 [2024-07-26 11:30:23.712217] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:28.165 TLSTESTn1 00:22:28.165 11:30:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:28.422 Running I/O for 10 seconds... 
00:22:40.626 00:22:40.626 Latency(us) 00:22:40.626 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:40.626 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:40.626 Verification LBA range: start 0x0 length 0x2000 00:22:40.626 TLSTESTn1 : 10.04 2602.21 10.16 0.00 0.00 49072.62 8058.50 88934.78 00:22:40.626 =================================================================================================================== 00:22:40.626 Total : 2602.21 10.16 0.00 0.00 49072.62 8058.50 88934.78 00:22:40.626 0 00:22:40.626 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:40.626 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:40.626 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:22:40.626 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:22:40.626 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:22:40.626 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:40.626 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:22:40.626 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:22:40.626 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:22:40.626 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:40.626 nvmf_trace.0 00:22:40.626 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:22:40.626 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2155752 00:22:40.626 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 2155752 ']' 00:22:40.626 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 2155752 00:22:40.626 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:22:40.626 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:40.626 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2155752 00:22:40.626 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:40.626 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:40.626 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2155752' 00:22:40.626 killing process with pid 2155752 00:22:40.626 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 2155752 00:22:40.626 Received shutdown signal, test time was about 10.000000 seconds 00:22:40.626 00:22:40.626 Latency(us) 00:22:40.626 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:40.626 =================================================================================================================== 00:22:40.626 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:40.627 
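In the TLSTESTn1 table above, the MiB/s column is just IOPS scaled by the 4096-byte I/O size, so the two columns always move together: MiB/s = IOPS x io_size / 2^20 = 2602.21 x 4096 / 1048576 ≈ 10.16. The earlier nvmf_tls run obeys the same relation (2260.98 x 4096 / 1048576 ≈ 8.83).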
[2024-07-26 11:30:34.214786] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:40.627 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 2155752 00:22:40.627 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:40.627 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:40.627 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:22:40.627 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:40.627 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:22:40.627 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:40.627 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:40.627 rmmod nvme_tcp 00:22:40.627 rmmod nvme_fabrics 00:22:40.627 rmmod nvme_keyring 00:22:40.627 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:40.627 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:22:40.627 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:22:40.627 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2155608 ']' 00:22:40.627 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2155608 00:22:40.627 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 2155608 ']' 00:22:40.627 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 2155608 00:22:40.627 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:22:40.627 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:40.627 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2155608 00:22:40.627 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:40.627 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:40.627 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2155608' 00:22:40.627 killing process with pid 2155608 00:22:40.627 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 2155608 00:22:40.627 [2024-07-26 11:30:34.632600] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:40.627 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 2155608 00:22:40.627 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:40.627 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:40.627 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:40.627 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:40.627 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:40.627 11:30:34 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.627 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:40.627 11:30:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.565 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:41.565 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:41.565 00:22:41.565 real 0m18.040s 00:22:41.565 user 0m22.762s 00:22:41.565 sys 0m6.521s 00:22:41.565 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:41.565 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:41.565 ************************************ 00:22:41.565 END TEST nvmf_fips 00:22:41.565 ************************************ 00:22:41.565 11:30:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:22:41.565 11:30:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:22:41.565 11:30:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:22:41.565 11:30:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:22:41.565 11:30:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:22:41.565 11:30:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:44.099 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:44.099 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:22:44.099 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:44.099 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:44.099 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:44.099 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:44.099 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:44.099 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:22:44.099 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:44.099 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:22:44.099 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:22:44.099 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:22:44.099 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:22:44.099 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:22:44.099 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:22:44.099 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:44.099 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:44.099 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:44.099 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:44.099 
11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:44.099 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:44.099 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:44.099 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:44.099 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:44.099 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:44.100 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:44.100 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
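The gather_supported_nvmf_pci_devs trace running through these lines is nvmf/common.sh sorting NICs into per-family arrays keyed by PCI vendor:device ID and then resolving each selected device to its kernel netdev. Reduced to its core, and assuming pci_bus_cache is an associative array populated earlier by a PCI bus scan (that step is not visible in this excerpt), the logic amounts to:

    # sketch of the discovery logic traced at nvmf/common.sh@289-@401
    intel=0x8086 mellanox=0x15b3
    e810+=(${pci_bus_cache["$intel:0x1592"]})    # E810 device IDs
    e810+=(${pci_bus_cache["$intel:0x159b"]})
    pci_devs=("${e810[@]}")                      # the e810 == e810 check above keeps only the E810 list
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdev(s) bound to this PCI function
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep e.g. cvl_0_0
        net_devs+=("${pci_net_devs[@]}")                   # the harness also checks operstate is up
    done

Both 0000:84:00.0 and 0000:84:00.1 match 0x8086:0x159b bound to the ice driver, so net_devs ends up holding the two ports cvl_0_0 and cvl_0_1, which the following lines collect into TCP_INTERFACE_LIST.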
00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:44.100 Found net devices under 0000:84:00.0: cvl_0_0 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:44.100 Found net devices under 0000:84:00.1: cvl_0_1 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:44.100 ************************************ 00:22:44.100 START TEST nvmf_perf_adq 00:22:44.100 ************************************ 00:22:44.100 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:44.359 * Looking for test storage... 
00:22:44.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:44.359 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:44.359 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:44.359 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:44.359 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:44.359 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:44.359 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:44.359 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:44.359 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:44.359 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:44.359 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:44.359 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:44.359 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:44.359 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:44.359 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:44.359 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:44.359 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:44.359 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:44.359 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:44.359 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:44.359 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:44.359 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:44.359 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:44.359 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.359 11:30:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.360 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.360 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:44.360 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.360 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:22:44.360 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:44.360 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:44.360 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:44.360 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:44.360 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:44.360 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:44.360 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:44.360 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:44.360 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:44.360 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:44.360 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:46.930 11:30:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:46.930 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:46.930 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.930 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:46.931 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:46.931 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.931 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:46.931 Found net devices under 0000:84:00.0: cvl_0_0 00:22:46.931 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.931 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:46.931 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:22:46.931 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:46.931 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.931 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:46.931 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:46.931 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.931 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:46.931 Found net devices under 0000:84:00.1: cvl_0_1 00:22:46.931 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.931 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:46.931 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:46.931 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:46.931 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:46.931 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:22:46.931 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:47.497 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:49.401 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 
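perf_adq.sh@53-@55 above bounce the NIC driver before the ADQ run. A minimal sketch of adq_reload_driver as traced (the rationale, clearing any leftover queue/channel state from earlier tests, is an inference, not something the log states):

    adq_reload_driver() {
        rmmod ice      # unload the E810 driver, dropping its queue configuration
        modprobe ice   # reload it; the cvl_0_* netdevs reappear shortly after
        sleep 5        # let link and netdev registration settle
    }

The nvmftestinit already underway above then rediscovers the same two ports and rebuilds the network setup from scratch.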
00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:54.672 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:54.672 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:54.672 Found net devices under 0000:84:00.0: cvl_0_0 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:54.672 11:30:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:54.672 Found net devices under 0000:84:00.1: cvl_0_1 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:54.672 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:54.673 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:54.673 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:54.673 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:54.673 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:54.673 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:54.673 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:54.673 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:54.673 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:54.673 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:54.673 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:54.673 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:54.673 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:54.673 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:54.673 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:54.673 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:54.673 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:54.673 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:54.673 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:54.673 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:54.673 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:54.673 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
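nvmf_tcp_init has now split the two E810 ports across network namespaces so initiator and target traffic genuinely crosses the link: cvl_0_0 (10.0.0.2, the target side) lives inside cvl_0_0_ns_spdk, while cvl_0_1 (10.0.0.1, the initiator side) stays in the root namespace. The core of it, collected from the trace above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

The iptables rule and the two pings that follow verify reachability in both directions before any NVMe/TCP traffic is attempted.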
00:22:54.673 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:54.673 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:22:54.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:54.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms
00:22:54.673
00:22:54.673 --- 10.0.0.2 ping statistics ---
00:22:54.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:54.673 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms
00:22:54.673 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:54.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:54.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms
00:22:54.673
00:22:54.673 --- 10.0.0.1 ping statistics ---
00:22:54.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:54.673 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms
00:22:54.673 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:54.673 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0
00:22:54.673 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:22:54.673 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:54.673 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:22:54.673 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:22:54.673 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:54.673 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:22:54.673 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:22:54.673 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc
00:22:54.673 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:22:54.673 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable
00:22:54.673 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:54.673 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2161666
00:22:54.673 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:22:54.673 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2161666
00:22:54.673 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 2161666 ']'
00:22:54.673 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:54.673 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100
00:22:54.673 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket
/var/tmp/spdk.sock...' 00:22:54.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:54.673 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:54.673 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:54.673 [2024-07-26 11:30:50.180108] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:22:54.673 [2024-07-26 11:30:50.180279] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:54.673 EAL: No free 2048 kB hugepages reported on node 1 00:22:54.673 [2024-07-26 11:30:50.288558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:54.931 [2024-07-26 11:30:50.416796] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:54.931 [2024-07-26 11:30:50.416864] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:54.931 [2024-07-26 11:30:50.416881] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:54.931 [2024-07-26 11:30:50.416895] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:54.931 [2024-07-26 11:30:50.416906] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:54.931 [2024-07-26 11:30:50.417004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.931 [2024-07-26 11:30:50.417066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:54.931 [2024-07-26 11:30:50.417139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:54.931 [2024-07-26 11:30:50.417141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.931 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:54.931 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:22:54.931 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:54.931 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:54.931 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:54.931 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:54.931 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:22:54.931 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:54.931 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:54.931 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.931 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:54.931 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.931 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 
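nvmf_tgt is now up inside the namespace (four reactors under -m 0xF, held at --wait-for-rpc), and adq_configure_nvmf_target drives the rest of the setup over /var/tmp/spdk.sock. The rpc_cmd sequence the trace below walks through, collected in one place as a sketch (rpc_cmd is the harness's wrapper around SPDK's scripts/rpc.py):

    rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
    rpc_cmd framework_start_init
    rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
    rpc_cmd bdev_malloc_create 64 512 -b Malloc1
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The 0 passed to adq_configure_nvmf_target above becomes --enable-placement-id 0 in the first call; that and --sock-priority 0 on the transport are the socket-layer knobs this run exercises.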
00:22:54.931 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:54.931 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.931 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:54.931 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.931 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:54.931 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.931 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:55.190 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.190 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:55.190 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.190 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:55.190 [2024-07-26 11:30:50.689693] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.190 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.190 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:55.190 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.190 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:55.190 Malloc1 00:22:55.190 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.190 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:55.190 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.190 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:55.190 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.190 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:55.190 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.190 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:55.190 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.190 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:55.190 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.190 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:55.190 [2024-07-26 11:30:50.741959] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:55.190 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:55.190 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2161823
00:22:55.190 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:22:55.190 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2
00:22:55.190 EAL: No free 2048 kB hugepages reported on node 1
00:22:57.139 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats
00:22:57.139 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:57.139 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:57.139 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:57.139 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{
00:22:57.139   "tick_rate": 2700000000,
00:22:57.139   "poll_groups": [
00:22:57.139     {
00:22:57.139       "name": "nvmf_tgt_poll_group_000",
00:22:57.139       "admin_qpairs": 1,
00:22:57.139       "io_qpairs": 1,
00:22:57.139       "current_admin_qpairs": 1,
00:22:57.139       "current_io_qpairs": 1,
00:22:57.139       "pending_bdev_io": 0,
00:22:57.139       "completed_nvme_io": 18292,
00:22:57.139       "transports": [
00:22:57.139         {
00:22:57.139           "trtype": "TCP"
00:22:57.139         }
00:22:57.139       ]
00:22:57.139     },
00:22:57.139     {
00:22:57.139       "name": "nvmf_tgt_poll_group_001",
00:22:57.139       "admin_qpairs": 0,
00:22:57.139       "io_qpairs": 1,
00:22:57.139       "current_admin_qpairs": 0,
00:22:57.139       "current_io_qpairs": 1,
00:22:57.139       "pending_bdev_io": 0,
00:22:57.139       "completed_nvme_io": 18514,
00:22:57.140       "transports": [
00:22:57.140         {
00:22:57.140           "trtype": "TCP"
00:22:57.140         }
00:22:57.140       ]
00:22:57.140     },
00:22:57.140     {
00:22:57.140       "name": "nvmf_tgt_poll_group_002",
00:22:57.140       "admin_qpairs": 0,
00:22:57.140       "io_qpairs": 1,
00:22:57.140       "current_admin_qpairs": 0,
00:22:57.140       "current_io_qpairs": 1,
00:22:57.140       "pending_bdev_io": 0,
00:22:57.140       "completed_nvme_io": 18784,
00:22:57.140       "transports": [
00:22:57.140         {
00:22:57.140           "trtype": "TCP"
00:22:57.140         }
00:22:57.140       ]
00:22:57.140     },
00:22:57.140     {
00:22:57.140       "name": "nvmf_tgt_poll_group_003",
00:22:57.140       "admin_qpairs": 0,
00:22:57.140       "io_qpairs": 1,
00:22:57.140       "current_admin_qpairs": 0,
00:22:57.140       "current_io_qpairs": 1,
00:22:57.140       "pending_bdev_io": 0,
00:22:57.140       "completed_nvme_io": 18005,
00:22:57.140       "transports": [
00:22:57.140         {
00:22:57.140           "trtype": "TCP"
00:22:57.140         }
00:22:57.140       ]
00:22:57.140     }
00:22:57.140   ]
00:22:57.140 }'
00:22:57.140 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length'
00:22:57.140 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l
00:22:57.398 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4
00:22:57.398 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]]
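The nvmf_get_stats output above is the heart of the test: each of the four poll groups (one per reactor core) reports current_io_qpairs: 1, and the completed_nvme_io counters (18292 / 18514 / 18784 / 18005) sit within a few percent of each other, meaning the four connections from the 0xF0-core initiator were spread one per core rather than piling onto a single poll group. The @78-@79 assertion reduces to this sketch (the failure branch is abbreviated here):

    # count poll groups that own exactly one I/O qpair; demand all 4
    count=$(rpc_cmd nvmf_get_stats \
            | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
            | wc -l)
    [[ $count -ne 4 ]] && exit 1   # ADQ failed to distribute qpairs evenly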
00:22:57.398 11:30:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 2161823
00:23:05.505 Initializing NVMe Controllers
00:23:05.505 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:05.505 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:23:05.505 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:23:05.505 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:23:05.505 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:23:05.505 Initialization complete. Launching workers.
00:23:05.505 ========================================================
00:23:05.505                                                                                         Latency(us)
00:23:05.505 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:23:05.505 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4:    9427.50      36.83    6790.40    2532.38   10102.67
00:23:05.506 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5:    9716.90      37.96    6589.16    4208.01    8239.17
00:23:05.506 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6:    9835.70      38.42    6508.28    2801.78   10353.84
00:23:05.506 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7:    9578.50      37.42    6682.69    2514.80    9873.10
00:23:05.506 ========================================================
00:23:05.506 Total                                                                    :   38558.60     150.62    6640.97    2514.80   10353.84
00:23:05.506
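A quick consistency check on the table: the per-core IOPS sum to the Total row, 9427.50 + 9716.90 + 9835.70 + 9578.50 = 38558.60, and at this run's 4 KiB I/O size (-o 4096) that is 38558.60 * 4096 / 2^20, roughly 150.62 MiB/s, again matching. Per-core throughput is balanced to within roughly 4%, the initiator-side view of the same even distribution the qpair count verified on the target.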
00:23:05.506 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini
00:23:05.506 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:05.506 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync
00:23:05.506 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:05.506 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e
00:23:05.506 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:05.506 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:23:05.506 rmmod nvme_tcp
00:23:05.506 rmmod nvme_fabrics
00:23:05.506 rmmod nvme_keyring
00:23:05.506 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:23:05.506 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e
00:23:05.506 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0
00:23:05.506 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2161666 ']'
00:23:05.506 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2161666
00:23:05.506 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 2161666 ']'
00:23:05.506 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 2161666
00:23:05.506 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname
00:23:05.506 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:05.506 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2161666
00:23:05.506 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:23:05.506 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:23:05.506 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2161666'
00:23:05.506 killing process with pid 2161666
00:23:05.506 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 2161666
00:23:05.506 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 2161666
00:23:05.765 11:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:23:05.765 11:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:23:05.765 11:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:23:05.765 11:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:23:05.765 11:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns
00:23:05.765 11:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:05.765 11:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:05.765 11:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:07.681 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:23:07.938 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver
00:23:07.938 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice
00:23:08.504 11:31:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice
00:23:10.438 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5
00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit
00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs
00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no
00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns
00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable
00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:15.710 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:15.710 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:15.710 Found net devices under 0000:84:00.0: cvl_0_0 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:15.710 Found net devices under 0000:84:00.1: cvl_0_1 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:15.710 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:15.711 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:15.711 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:15.711 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:15.711 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:15.711 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:15.711 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:15.711 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:15.711 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:15.711 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:15.711 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:15.711 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:15.711 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:15.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:23:15.711 00:23:15.711 --- 10.0.0.2 ping statistics --- 00:23:15.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.711 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:23:15.711 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:15.711 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:15.711 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:23:15.711 00:23:15.711 --- 10.0.0.1 ping statistics --- 00:23:15.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.711 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:23:15.711 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:15.711 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:23:15.711 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:15.711 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:15.711 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:15.711 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:15.711 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:15.711 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:15.711 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:15.711 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:23:15.711 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:15.711 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:23:15.711 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:15.711 net.core.busy_poll = 1 00:23:15.711 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:23:15.711 net.core.busy_read = 1 00:23:15.711 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:15.711 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:15.711 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:23:15.711 
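nvmf_tcp_init, traced above, turns the two ports into a self-contained target/initiator pair on one host: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace and takes the target address 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens port 4420, and a ping in each direction proves the path before any NVMe traffic flows. Condensed from the traced commands:

    # Back-to-back topology: the target side lives in its own netns so
    # the kernel routes traffic over the wire instead of loopback.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                         # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator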
11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:15.711 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:15.711 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:15.711 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:15.711 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:15.711 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:15.711 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2164325 00:23:15.711 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:15.711 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2164325 00:23:15.711 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 2164325 ']' 00:23:15.711 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.711 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:15.711 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.711 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:15.711 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:15.711 [2024-07-26 11:31:11.322740] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:23:15.711 [2024-07-26 11:31:11.322853] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:15.711 EAL: No free 2048 kB hugepages reported on node 1 00:23:15.970 [2024-07-26 11:31:11.406406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:15.970 [2024-07-26 11:31:11.529724] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:15.970 [2024-07-26 11:31:11.529790] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:15.970 [2024-07-26 11:31:11.529807] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:15.970 [2024-07-26 11:31:11.529820] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:15.970 [2024-07-26 11:31:11.529832] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
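adq_configure_driver, traced across the entries above, is the step that actually arms ADQ on the target port: hardware tc offload on, kernel busy polling on, an mqprio qdisc that carves the queues into two traffic-class channels, and a flower filter that steers NVMe/TCP traffic for 10.0.0.2:4420 into channel 1 in hardware only (skip_sw). The same commands, reflowed for readability; everything device-side runs inside the target namespace:

    NS="ip netns exec cvl_0_0_ns_spdk"
    $NS ethtool --offload cvl_0_0 hw-tc-offload on
    $NS ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # TC0 = queues 0-1 (default traffic), TC1 = queues 2-3 (ADQ channel).
    $NS tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 \
        queues 2@0 2@2 hw 1 mode channel
    $NS tc qdisc add dev cvl_0_0 ingress
    # Match NVMe/TCP to the target address and pin it to TC1 in hardware.
    $NS tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1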
00:23:15.970 [2024-07-26 11:31:11.529915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.970 [2024-07-26 11:31:11.529968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:15.970 [2024-07-26 11:31:11.530039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:15.970 [2024-07-26 11:31:11.530043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.970 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:15.970 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:23:15.970 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:15.970 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:15.970 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:15.970 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:15.970 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:23:15.970 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:15.970 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:15.970 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.970 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:15.970 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.229 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:16.229 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:16.229 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.229 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.229 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.229 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:16.229 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.229 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.229 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.229 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:16.229 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.229 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.229 [2024-07-26 11:31:11.826275] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:16.229 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 
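Starting the target with --wait-for-rpc is what makes adq_configure_nvmf_target possible: socket options such as placement IDs must be set before the framework initializes and the transport is created. A sketch of the same RPC sequence through SPDK's scripts/rpc.py (the trace issues these via its rpc_cmd wrapper over /var/tmp/spdk.sock):

    # Order matters: sock options -> framework init -> transport.
    impl=$(scripts/rpc.py sock_get_default_impl | jq -r .impl_name)  # posix here
    scripts/rpc.py sock_impl_set_options -i "$impl" \
        --enable-placement-id 1 --enable-zerocopy-send-server
    scripts/rpc.py framework_start_init      # releases the --wait-for-rpc hold
    scripts/rpc.py nvmf_create_transport -t tcp -o \
        --io-unit-size 8192 --sock-priority 1

--sock-priority 1 puts the target's sockets on the priority that the mqprio map sends to TC1, which is how the SPDK side lines up with the driver configuration above.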
]] 00:23:16.229 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:16.229 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.229 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.229 Malloc1 00:23:16.229 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.229 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:16.229 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.229 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.229 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.229 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:16.229 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.229 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.229 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.229 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:16.229 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.229 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.229 [2024-07-26 11:31:11.880642] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:16.229 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.229 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2164460 00:23:16.229 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:23:16.229 11:31:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:16.487 EAL: No free 2048 kB hugepages reported on node 1 00:23:18.388 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:23:18.388 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.388 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:18.388 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.388 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:23:18.388 "tick_rate": 2700000000, 00:23:18.388 "poll_groups": [ 00:23:18.388 { 00:23:18.388 "name": "nvmf_tgt_poll_group_000", 00:23:18.388 "admin_qpairs": 1, 00:23:18.388 "io_qpairs": 3, 00:23:18.388 "current_admin_qpairs": 1, 00:23:18.388 
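With the transport up, the export path traced above is four RPCs and then the load generator: a 64 MiB malloc bdev, a subsystem, the namespace, a TCP listener on 10.0.0.2:4420, and spdk_nvme_perf pinned to cores 4-7 (-c 0xF0) driving 64-deep 4 KiB random reads for 10 seconds. The same flow outside the harness, with arguments as in the trace:

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1   # 64 MiB, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &
    perfpid=$!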
"current_io_qpairs": 3, 00:23:18.388 "pending_bdev_io": 0, 00:23:18.388 "completed_nvme_io": 24144, 00:23:18.388 "transports": [ 00:23:18.388 { 00:23:18.388 "trtype": "TCP" 00:23:18.388 } 00:23:18.388 ] 00:23:18.388 }, 00:23:18.388 { 00:23:18.388 "name": "nvmf_tgt_poll_group_001", 00:23:18.388 "admin_qpairs": 0, 00:23:18.388 "io_qpairs": 1, 00:23:18.388 "current_admin_qpairs": 0, 00:23:18.388 "current_io_qpairs": 1, 00:23:18.388 "pending_bdev_io": 0, 00:23:18.388 "completed_nvme_io": 22665, 00:23:18.388 "transports": [ 00:23:18.388 { 00:23:18.388 "trtype": "TCP" 00:23:18.388 } 00:23:18.388 ] 00:23:18.388 }, 00:23:18.388 { 00:23:18.388 "name": "nvmf_tgt_poll_group_002", 00:23:18.388 "admin_qpairs": 0, 00:23:18.388 "io_qpairs": 0, 00:23:18.388 "current_admin_qpairs": 0, 00:23:18.388 "current_io_qpairs": 0, 00:23:18.388 "pending_bdev_io": 0, 00:23:18.388 "completed_nvme_io": 0, 00:23:18.388 "transports": [ 00:23:18.388 { 00:23:18.388 "trtype": "TCP" 00:23:18.388 } 00:23:18.388 ] 00:23:18.388 }, 00:23:18.388 { 00:23:18.388 "name": "nvmf_tgt_poll_group_003", 00:23:18.388 "admin_qpairs": 0, 00:23:18.388 "io_qpairs": 0, 00:23:18.388 "current_admin_qpairs": 0, 00:23:18.388 "current_io_qpairs": 0, 00:23:18.388 "pending_bdev_io": 0, 00:23:18.388 "completed_nvme_io": 0, 00:23:18.388 "transports": [ 00:23:18.388 { 00:23:18.388 "trtype": "TCP" 00:23:18.388 } 00:23:18.388 ] 00:23:18.388 } 00:23:18.388 ] 00:23:18.388 }' 00:23:18.388 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:18.388 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:23:18.388 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:23:18.388 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:23:18.388 11:31:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2164460 00:23:26.503 Initializing NVMe Controllers 00:23:26.503 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:26.503 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:26.503 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:26.503 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:26.503 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:26.503 Initialization complete. Launching workers. 
00:23:26.503 ========================================================
00:23:26.503 Latency(us)
00:23:26.503 Device Information : IOPS MiB/s Average min max
00:23:26.503 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11946.85 46.67 5357.03 1921.90 7977.93
00:23:26.503 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4099.75 16.01 15626.11 2884.90 63423.33
00:23:26.503 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4132.35 16.14 15500.75 3185.08 62256.62
00:23:26.503 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4520.04 17.66 14161.43 2105.24 63155.27
00:23:26.503 ========================================================
00:23:26.503 Total : 24699.00 96.48 10369.96 1921.90 63423.33
00:23:26.503
00:23:26.503 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini
00:23:26.503 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:26.503 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync
00:23:26.503 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:26.503 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e
00:23:26.503 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:26.503 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:23:26.503 rmmod nvme_tcp
00:23:26.503 rmmod nvme_fabrics
00:23:26.503 rmmod nvme_keyring
00:23:26.503 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:23:26.503 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e
00:23:26.503 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0
00:23:26.503 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2164325 ']'
00:23:26.503 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2164325
00:23:26.503 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 2164325 ']'
00:23:26.503 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 2164325
00:23:26.503 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname
00:23:26.503 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:26.503 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2164325
00:23:26.503 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:23:26.503 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:23:26.503 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2164325'
00:23:26.503 killing process with pid 2164325
11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 2164325
00:23:26.503 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 2164325
00:23:27.069 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:23:27.069
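The split in the table is the ADQ signature: core 4 rides the steered hardware channel at about 12k IOPS with a 5.4 ms average, while cores 5-7 share the default path at roughly 4.1-4.5k IOPS each, with about three times the average latency and far worse tails. The Total row is the plain sum of the rows, which is easy to sanity-check:

    # Sum the per-core IOPS column from the table above: ~24699.
    awk 'BEGIN {
        n = split("11946.85 4099.75 4132.35 4520.04", iops, " ")
        for (i = 1; i <= n; i++) total += iops[i]
        printf "total IOPS = %.2f\n", total   # 24698.99, matching 24699.00
    }'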
11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:27.069 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:27.069 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:27.069 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:27.069 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.069 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:27.069 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.355 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:30.355 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:23:30.355 00:23:30.355 real 0m45.786s 00:23:30.355 user 2m41.294s 00:23:30.355 sys 0m10.033s 00:23:30.355 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:30.355 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:30.355 ************************************ 00:23:30.355 END TEST nvmf_perf_adq 00:23:30.355 ************************************ 00:23:30.355 11:31:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:30.355 11:31:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:30.355 11:31:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:30.355 11:31:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:30.355 ************************************ 00:23:30.355 START TEST nvmf_shutdown 00:23:30.355 ************************************ 00:23:30.355 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:30.355 * Looking for test storage... 
00:23:30.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:30.355 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:30.355 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:23:30.355 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:30.355 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:30.355 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:30.355 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:30.355 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:30.355 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:30.355 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:30.355 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:30.355 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:30.355 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:30.355 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:30.355 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:30.355 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:30.356 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:30.356 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:30.356 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:30.356 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:30.356 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:30.356 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:30.356 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:30.356 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.356 11:31:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.356 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.356 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:30.356 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.356 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:23:30.356 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:30.356 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:30.356 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:30.356 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:30.356 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:30.356 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:30.356 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:30.356 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:30.356 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:30.356 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:30.356 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:30.356 11:31:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:30.356 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:30.356 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:30.356 ************************************ 00:23:30.356 START TEST nvmf_shutdown_tc1 00:23:30.356 ************************************ 00:23:30.356 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:23:30.356 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:23:30.356 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:30.356 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:30.356 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:30.356 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:30.356 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:30.356 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:30.356 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.356 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:30.356 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.356 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:30.356 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:30.356 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:30.356 11:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
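Note what nvmftestinit does first: it installs nvmftestfini as a trap on SIGINT, SIGTERM, and EXIT before touching drivers or namespaces, so the cleanup the previous test's tail shows (module unload, target kill, namespace teardown) runs even if a TC dies mid-setup. A simplified sketch of that pattern; the real fini in nvmf/common.sh does more bookkeeping than shown here, and the netns removal is an assumed stand-in for _remove_spdk_ns:

    # Cleanup mirrors the fini sequence traced at the end of perf_adq.
    nvmftestfini() {
        modprobe -r nvme-tcp nvme-fabrics 2>/dev/null || true
        [[ -n ${nvmfpid:-} ]] && kill "$nvmfpid" 2>/dev/null && wait "$nvmfpid"
        ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
        ip -4 addr flush cvl_0_1 2>/dev/null || true
    }
    trap nvmftestfini SIGINT SIGTERM EXIT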
nvmf/common.sh@295 -- # local -ga net_devs 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:32.891 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:32.891 11:31:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:32.891 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:32.891 Found net devices under 0000:84:00.0: cvl_0_0 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:32.891 Found net devices under 0000:84:00.1: cvl_0_1 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:32.891 11:31:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:32.891 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:32.892 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:32.892 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:32.892 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:32.892 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:32.892 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:32.892 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:23:32.892 00:23:32.892 --- 10.0.0.2 ping statistics --- 00:23:32.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.892 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:23:32.892 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:32.892 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:32.892 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:23:32.892 00:23:32.892 --- 10.0.0.1 ping statistics --- 00:23:32.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.892 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:23:32.892 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:32.892 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:23:32.892 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:32.892 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:32.892 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:32.892 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:32.892 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:32.892 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:32.892 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:32.892 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:32.892 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:32.892 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:32.892 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 
-- # set +x 00:23:32.892 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2167759 00:23:32.892 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:32.892 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2167759 00:23:32.892 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 2167759 ']' 00:23:32.892 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.892 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:32.892 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.892 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:32.892 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:32.892 [2024-07-26 11:31:28.275497] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:23:32.892 [2024-07-26 11:31:28.275582] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:32.892 EAL: No free 2048 kB hugepages reported on node 1 00:23:32.892 [2024-07-26 11:31:28.393546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:32.892 [2024-07-26 11:31:28.539348] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:32.892 [2024-07-26 11:31:28.539416] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:32.892 [2024-07-26 11:31:28.539446] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:32.892 [2024-07-26 11:31:28.539463] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:32.892 [2024-07-26 11:31:28.539500] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
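waitforlisten gates the rest of the TC on the target actually coming up: it polls for the UNIX-domain RPC socket (default /var/tmp/spdk.sock) while checking that pid 2167759 is still alive, giving up after max_retries=100 as in the trace. A simplified sketch; the real helper in autotest_common.sh also confirms the socket answers an RPC, which is elided here:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            [[ -S $rpc_addr ]] && return 0           # RPC socket is up
            sleep 0.1
        done
        return 1                                     # timed out waiting
    }
    waitforlisten "$nvmfpid"                         # as invoked above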
00:23:32.892 [2024-07-26 11:31:28.539558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:32.892 [2024-07-26 11:31:28.539624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:32.892 [2024-07-26 11:31:28.539650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:32.892 [2024-07-26 11:31:28.539654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.151 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:33.151 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:23:33.151 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:33.151 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:33.151 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:33.151 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:33.151 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:33.151 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.151 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:33.151 [2024-07-26 11:31:28.719741] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:33.151 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.151 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:33.151 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:33.151 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:33.151 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:33.151 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:33.151 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:33.151 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:33.151 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:33.151 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:33.152 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:33.152 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:33.152 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:23:33.152 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:33.152 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:33.152 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:33.152 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:33.152 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:33.152 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:33.152 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:33.152 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:33.152 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:33.152 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:33.152 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:33.152 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:33.152 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:33.152 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:33.152 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.152 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:33.152 Malloc1 00:23:33.152 [2024-07-26 11:31:28.809555] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:33.411 Malloc2 00:23:33.411 Malloc3 00:23:33.411 Malloc4 00:23:33.411 Malloc5 00:23:33.411 Malloc6 00:23:33.670 Malloc7 00:23:33.670 Malloc8 00:23:33.670 Malloc9 00:23:33.670 Malloc10 00:23:33.670 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.670 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:33.670 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:33.670 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:33.670 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2167942 00:23:33.670 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2167942 /var/tmp/bdevperf.sock 00:23:33.670 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 2167942 ']' 00:23:33.670 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:33.670 11:31:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:33.670 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:33.670 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:33.670 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:33.670 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:33.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:33.670 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:33.670 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:33.670 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:33.670 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:33.670 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:33.670 { 00:23:33.670 "params": { 00:23:33.670 "name": "Nvme$subsystem", 00:23:33.670 "trtype": "$TEST_TRANSPORT", 00:23:33.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.670 "adrfam": "ipv4", 00:23:33.670 "trsvcid": "$NVMF_PORT", 00:23:33.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.670 "hdgst": ${hdgst:-false}, 00:23:33.670 "ddgst": ${ddgst:-false} 00:23:33.670 }, 00:23:33.670 "method": "bdev_nvme_attach_controller" 00:23:33.670 } 00:23:33.670 EOF 00:23:33.670 )") 00:23:33.670 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:33.670 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:33.670 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:33.670 { 00:23:33.670 "params": { 00:23:33.670 "name": "Nvme$subsystem", 00:23:33.670 "trtype": "$TEST_TRANSPORT", 00:23:33.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.670 "adrfam": "ipv4", 00:23:33.670 "trsvcid": "$NVMF_PORT", 00:23:33.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.670 "hdgst": ${hdgst:-false}, 00:23:33.670 "ddgst": ${ddgst:-false} 00:23:33.670 }, 00:23:33.670 "method": "bdev_nvme_attach_controller" 00:23:33.670 } 00:23:33.670 EOF 00:23:33.670 )") 00:23:33.670 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:33.670 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:33.670 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:33.670 { 00:23:33.670 "params": { 00:23:33.670 "name": 
"Nvme$subsystem", 00:23:33.670 "trtype": "$TEST_TRANSPORT", 00:23:33.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.670 "adrfam": "ipv4", 00:23:33.670 "trsvcid": "$NVMF_PORT", 00:23:33.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.670 "hdgst": ${hdgst:-false}, 00:23:33.670 "ddgst": ${ddgst:-false} 00:23:33.670 }, 00:23:33.670 "method": "bdev_nvme_attach_controller" 00:23:33.670 } 00:23:33.670 EOF 00:23:33.670 )") 00:23:33.670 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:33.930 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:33.930 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:33.930 { 00:23:33.930 "params": { 00:23:33.931 "name": "Nvme$subsystem", 00:23:33.931 "trtype": "$TEST_TRANSPORT", 00:23:33.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.931 "adrfam": "ipv4", 00:23:33.931 "trsvcid": "$NVMF_PORT", 00:23:33.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.931 "hdgst": ${hdgst:-false}, 00:23:33.931 "ddgst": ${ddgst:-false} 00:23:33.931 }, 00:23:33.931 "method": "bdev_nvme_attach_controller" 00:23:33.931 } 00:23:33.931 EOF 00:23:33.931 )") 00:23:33.931 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:33.931 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:33.931 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:33.931 { 00:23:33.931 "params": { 00:23:33.931 "name": "Nvme$subsystem", 00:23:33.931 "trtype": "$TEST_TRANSPORT", 00:23:33.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.931 "adrfam": "ipv4", 00:23:33.931 "trsvcid": "$NVMF_PORT", 00:23:33.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.931 "hdgst": ${hdgst:-false}, 00:23:33.931 "ddgst": ${ddgst:-false} 00:23:33.931 }, 00:23:33.931 "method": "bdev_nvme_attach_controller" 00:23:33.931 } 00:23:33.931 EOF 00:23:33.931 )") 00:23:33.931 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:33.931 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:33.931 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:33.931 { 00:23:33.931 "params": { 00:23:33.931 "name": "Nvme$subsystem", 00:23:33.931 "trtype": "$TEST_TRANSPORT", 00:23:33.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.931 "adrfam": "ipv4", 00:23:33.931 "trsvcid": "$NVMF_PORT", 00:23:33.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.931 "hdgst": ${hdgst:-false}, 00:23:33.931 "ddgst": ${ddgst:-false} 00:23:33.931 }, 00:23:33.931 "method": "bdev_nvme_attach_controller" 00:23:33.931 } 00:23:33.931 EOF 00:23:33.931 )") 00:23:33.931 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:33.931 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:23:33.931 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:33.931 { 00:23:33.931 "params": { 00:23:33.931 "name": "Nvme$subsystem", 00:23:33.931 "trtype": "$TEST_TRANSPORT", 00:23:33.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.931 "adrfam": "ipv4", 00:23:33.931 "trsvcid": "$NVMF_PORT", 00:23:33.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.931 "hdgst": ${hdgst:-false}, 00:23:33.931 "ddgst": ${ddgst:-false} 00:23:33.931 }, 00:23:33.931 "method": "bdev_nvme_attach_controller" 00:23:33.931 } 00:23:33.931 EOF 00:23:33.931 )") 00:23:33.931 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:33.931 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:33.931 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:33.931 { 00:23:33.931 "params": { 00:23:33.931 "name": "Nvme$subsystem", 00:23:33.931 "trtype": "$TEST_TRANSPORT", 00:23:33.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.931 "adrfam": "ipv4", 00:23:33.931 "trsvcid": "$NVMF_PORT", 00:23:33.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.931 "hdgst": ${hdgst:-false}, 00:23:33.931 "ddgst": ${ddgst:-false} 00:23:33.931 }, 00:23:33.931 "method": "bdev_nvme_attach_controller" 00:23:33.931 } 00:23:33.931 EOF 00:23:33.931 )") 00:23:33.931 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:33.931 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:33.931 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:33.931 { 00:23:33.931 "params": { 00:23:33.931 "name": "Nvme$subsystem", 00:23:33.931 "trtype": "$TEST_TRANSPORT", 00:23:33.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.931 "adrfam": "ipv4", 00:23:33.931 "trsvcid": "$NVMF_PORT", 00:23:33.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.931 "hdgst": ${hdgst:-false}, 00:23:33.931 "ddgst": ${ddgst:-false} 00:23:33.931 }, 00:23:33.931 "method": "bdev_nvme_attach_controller" 00:23:33.931 } 00:23:33.931 EOF 00:23:33.931 )") 00:23:33.931 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:33.931 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:33.931 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:33.931 { 00:23:33.931 "params": { 00:23:33.931 "name": "Nvme$subsystem", 00:23:33.931 "trtype": "$TEST_TRANSPORT", 00:23:33.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.931 "adrfam": "ipv4", 00:23:33.931 "trsvcid": "$NVMF_PORT", 00:23:33.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.931 "hdgst": ${hdgst:-false}, 00:23:33.931 "ddgst": ${ddgst:-false} 00:23:33.931 }, 00:23:33.931 "method": "bdev_nvme_attach_controller" 00:23:33.931 } 00:23:33.931 EOF 00:23:33.931 )") 00:23:33.931 11:31:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:33.931 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:23:33.931 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:33.931 11:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:33.931 "params": { 00:23:33.931 "name": "Nvme1", 00:23:33.931 "trtype": "tcp", 00:23:33.931 "traddr": "10.0.0.2", 00:23:33.931 "adrfam": "ipv4", 00:23:33.931 "trsvcid": "4420", 00:23:33.931 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.931 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:33.931 "hdgst": false, 00:23:33.931 "ddgst": false 00:23:33.931 }, 00:23:33.931 "method": "bdev_nvme_attach_controller" 00:23:33.931 },{ 00:23:33.931 "params": { 00:23:33.931 "name": "Nvme2", 00:23:33.931 "trtype": "tcp", 00:23:33.931 "traddr": "10.0.0.2", 00:23:33.931 "adrfam": "ipv4", 00:23:33.931 "trsvcid": "4420", 00:23:33.931 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:33.931 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:33.931 "hdgst": false, 00:23:33.931 "ddgst": false 00:23:33.931 }, 00:23:33.931 "method": "bdev_nvme_attach_controller" 00:23:33.931 },{ 00:23:33.931 "params": { 00:23:33.931 "name": "Nvme3", 00:23:33.931 "trtype": "tcp", 00:23:33.931 "traddr": "10.0.0.2", 00:23:33.931 "adrfam": "ipv4", 00:23:33.931 "trsvcid": "4420", 00:23:33.931 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:33.931 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:33.931 "hdgst": false, 00:23:33.931 "ddgst": false 00:23:33.931 }, 00:23:33.931 "method": "bdev_nvme_attach_controller" 00:23:33.931 },{ 00:23:33.931 "params": { 00:23:33.931 "name": "Nvme4", 00:23:33.931 "trtype": "tcp", 00:23:33.931 "traddr": "10.0.0.2", 00:23:33.931 "adrfam": "ipv4", 00:23:33.931 "trsvcid": "4420", 00:23:33.931 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:33.931 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:33.931 "hdgst": false, 00:23:33.931 "ddgst": false 00:23:33.931 }, 00:23:33.931 "method": "bdev_nvme_attach_controller" 00:23:33.931 },{ 00:23:33.931 "params": { 00:23:33.931 "name": "Nvme5", 00:23:33.931 "trtype": "tcp", 00:23:33.931 "traddr": "10.0.0.2", 00:23:33.931 "adrfam": "ipv4", 00:23:33.931 "trsvcid": "4420", 00:23:33.931 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:33.931 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:33.931 "hdgst": false, 00:23:33.931 "ddgst": false 00:23:33.931 }, 00:23:33.931 "method": "bdev_nvme_attach_controller" 00:23:33.931 },{ 00:23:33.931 "params": { 00:23:33.931 "name": "Nvme6", 00:23:33.931 "trtype": "tcp", 00:23:33.931 "traddr": "10.0.0.2", 00:23:33.931 "adrfam": "ipv4", 00:23:33.931 "trsvcid": "4420", 00:23:33.931 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:33.931 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:33.931 "hdgst": false, 00:23:33.931 "ddgst": false 00:23:33.931 }, 00:23:33.931 "method": "bdev_nvme_attach_controller" 00:23:33.931 },{ 00:23:33.931 "params": { 00:23:33.931 "name": "Nvme7", 00:23:33.931 "trtype": "tcp", 00:23:33.931 "traddr": "10.0.0.2", 00:23:33.932 "adrfam": "ipv4", 00:23:33.932 "trsvcid": "4420", 00:23:33.932 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:33.932 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:33.932 "hdgst": false, 00:23:33.932 "ddgst": false 00:23:33.932 }, 00:23:33.932 "method": "bdev_nvme_attach_controller" 00:23:33.932 },{ 00:23:33.932 "params": { 00:23:33.932 "name": "Nvme8", 00:23:33.932 "trtype": "tcp", 
00:23:33.932 "traddr": "10.0.0.2", 00:23:33.932 "adrfam": "ipv4", 00:23:33.932 "trsvcid": "4420", 00:23:33.932 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:33.932 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:33.932 "hdgst": false, 00:23:33.932 "ddgst": false 00:23:33.932 }, 00:23:33.932 "method": "bdev_nvme_attach_controller" 00:23:33.932 },{ 00:23:33.932 "params": { 00:23:33.932 "name": "Nvme9", 00:23:33.932 "trtype": "tcp", 00:23:33.932 "traddr": "10.0.0.2", 00:23:33.932 "adrfam": "ipv4", 00:23:33.932 "trsvcid": "4420", 00:23:33.932 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:33.932 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:33.932 "hdgst": false, 00:23:33.932 "ddgst": false 00:23:33.932 }, 00:23:33.932 "method": "bdev_nvme_attach_controller" 00:23:33.932 },{ 00:23:33.932 "params": { 00:23:33.932 "name": "Nvme10", 00:23:33.932 "trtype": "tcp", 00:23:33.932 "traddr": "10.0.0.2", 00:23:33.932 "adrfam": "ipv4", 00:23:33.932 "trsvcid": "4420", 00:23:33.932 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:33.932 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:33.932 "hdgst": false, 00:23:33.932 "ddgst": false 00:23:33.932 }, 00:23:33.932 "method": "bdev_nvme_attach_controller" 00:23:33.932 }' 00:23:33.932 [2024-07-26 11:31:29.371647] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:23:33.932 [2024-07-26 11:31:29.371755] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:33.932 EAL: No free 2048 kB hugepages reported on node 1 00:23:33.932 [2024-07-26 11:31:29.447611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.932 [2024-07-26 11:31:29.569514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.881 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:35.881 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:23:35.881 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:35.881 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.881 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:35.881 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.881 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2167942 00:23:35.881 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:23:35.881 11:31:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:23:37.254 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2167942 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:37.254 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2167759 00:23:37.254 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:37.255 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:37.255 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:37.255 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:37.255 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:37.255 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:37.255 { 00:23:37.255 "params": { 00:23:37.255 "name": "Nvme$subsystem", 00:23:37.255 "trtype": "$TEST_TRANSPORT", 00:23:37.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.255 "adrfam": "ipv4", 00:23:37.255 "trsvcid": "$NVMF_PORT", 00:23:37.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.255 "hdgst": ${hdgst:-false}, 00:23:37.255 "ddgst": ${ddgst:-false} 00:23:37.255 }, 00:23:37.255 "method": "bdev_nvme_attach_controller" 00:23:37.255 } 00:23:37.255 EOF 00:23:37.255 )") 00:23:37.255 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:37.255 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:37.255 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:37.255 { 00:23:37.255 "params": { 00:23:37.255 "name": "Nvme$subsystem", 00:23:37.255 "trtype": "$TEST_TRANSPORT", 00:23:37.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.255 "adrfam": "ipv4", 00:23:37.255 "trsvcid": "$NVMF_PORT", 00:23:37.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.255 "hdgst": ${hdgst:-false}, 00:23:37.255 "ddgst": ${ddgst:-false} 00:23:37.255 }, 00:23:37.255 "method": "bdev_nvme_attach_controller" 00:23:37.255 } 00:23:37.255 EOF 00:23:37.255 )") 00:23:37.255 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:37.255 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:37.255 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:37.255 { 00:23:37.255 "params": { 00:23:37.255 "name": "Nvme$subsystem", 00:23:37.255 "trtype": "$TEST_TRANSPORT", 00:23:37.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.255 "adrfam": "ipv4", 00:23:37.255 "trsvcid": "$NVMF_PORT", 00:23:37.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.255 "hdgst": ${hdgst:-false}, 00:23:37.255 "ddgst": ${ddgst:-false} 00:23:37.255 }, 00:23:37.255 "method": "bdev_nvme_attach_controller" 00:23:37.255 } 00:23:37.255 EOF 00:23:37.255 )") 00:23:37.255 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:37.255 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:37.255 11:31:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:37.255 { 00:23:37.255 "params": { 00:23:37.255 "name": "Nvme$subsystem", 00:23:37.255 "trtype": "$TEST_TRANSPORT", 00:23:37.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.255 "adrfam": "ipv4", 00:23:37.255 "trsvcid": "$NVMF_PORT", 00:23:37.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.255 "hdgst": ${hdgst:-false}, 00:23:37.255 "ddgst": ${ddgst:-false} 00:23:37.255 }, 00:23:37.255 "method": "bdev_nvme_attach_controller" 00:23:37.255 } 00:23:37.255 EOF 00:23:37.255 )") 00:23:37.255 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:37.255 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:37.255 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:37.255 { 00:23:37.255 "params": { 00:23:37.255 "name": "Nvme$subsystem", 00:23:37.255 "trtype": "$TEST_TRANSPORT", 00:23:37.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.255 "adrfam": "ipv4", 00:23:37.255 "trsvcid": "$NVMF_PORT", 00:23:37.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.255 "hdgst": ${hdgst:-false}, 00:23:37.255 "ddgst": ${ddgst:-false} 00:23:37.255 }, 00:23:37.255 "method": "bdev_nvme_attach_controller" 00:23:37.255 } 00:23:37.255 EOF 00:23:37.255 )") 00:23:37.255 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:37.255 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:37.255 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:37.255 { 00:23:37.255 "params": { 00:23:37.255 "name": "Nvme$subsystem", 00:23:37.255 "trtype": "$TEST_TRANSPORT", 00:23:37.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.255 "adrfam": "ipv4", 00:23:37.255 "trsvcid": "$NVMF_PORT", 00:23:37.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.255 "hdgst": ${hdgst:-false}, 00:23:37.255 "ddgst": ${ddgst:-false} 00:23:37.255 }, 00:23:37.255 "method": "bdev_nvme_attach_controller" 00:23:37.255 } 00:23:37.255 EOF 00:23:37.255 )") 00:23:37.255 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:37.255 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:37.255 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:37.255 { 00:23:37.255 "params": { 00:23:37.255 "name": "Nvme$subsystem", 00:23:37.255 "trtype": "$TEST_TRANSPORT", 00:23:37.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.255 "adrfam": "ipv4", 00:23:37.255 "trsvcid": "$NVMF_PORT", 00:23:37.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.255 "hdgst": ${hdgst:-false}, 00:23:37.255 "ddgst": ${ddgst:-false} 00:23:37.255 }, 00:23:37.255 "method": "bdev_nvme_attach_controller" 00:23:37.255 } 00:23:37.255 EOF 00:23:37.255 )") 00:23:37.255 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:23:37.255 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:37.255 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:37.255 { 00:23:37.255 "params": { 00:23:37.255 "name": "Nvme$subsystem", 00:23:37.255 "trtype": "$TEST_TRANSPORT", 00:23:37.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.255 "adrfam": "ipv4", 00:23:37.255 "trsvcid": "$NVMF_PORT", 00:23:37.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.255 "hdgst": ${hdgst:-false}, 00:23:37.255 "ddgst": ${ddgst:-false} 00:23:37.255 }, 00:23:37.255 "method": "bdev_nvme_attach_controller" 00:23:37.255 } 00:23:37.255 EOF 00:23:37.255 )") 00:23:37.255 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:37.255 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:37.255 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:37.255 { 00:23:37.255 "params": { 00:23:37.255 "name": "Nvme$subsystem", 00:23:37.255 "trtype": "$TEST_TRANSPORT", 00:23:37.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.255 "adrfam": "ipv4", 00:23:37.255 "trsvcid": "$NVMF_PORT", 00:23:37.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.255 "hdgst": ${hdgst:-false}, 00:23:37.255 "ddgst": ${ddgst:-false} 00:23:37.255 }, 00:23:37.255 "method": "bdev_nvme_attach_controller" 00:23:37.255 } 00:23:37.255 EOF 00:23:37.255 )") 00:23:37.255 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:37.255 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:37.255 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:37.255 { 00:23:37.255 "params": { 00:23:37.255 "name": "Nvme$subsystem", 00:23:37.255 "trtype": "$TEST_TRANSPORT", 00:23:37.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.255 "adrfam": "ipv4", 00:23:37.255 "trsvcid": "$NVMF_PORT", 00:23:37.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.255 "hdgst": ${hdgst:-false}, 00:23:37.255 "ddgst": ${ddgst:-false} 00:23:37.255 }, 00:23:37.255 "method": "bdev_nvme_attach_controller" 00:23:37.256 } 00:23:37.256 EOF 00:23:37.256 )") 00:23:37.256 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:37.256 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
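
This second pass of gen_nvmf_target_json 1..10 (its JSON is printed just below) feeds the actual bdevperf run at shutdown.sh@91: the config arrives on an anonymous pipe via process substitution (/dev/fd/62 in the trace) rather than a file on disk. The invocation shape, reconstructed from the trace with the workspace path shortened:

  # 64 queued I/Os, 64 KiB blocks, verify workload, 1 second run,
  # attaching all ten controllers described by the generated JSON
  ./build/examples/bdevperf \
      --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
      -q 64 -o 65536 -w verify -t 1
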
00:23:37.256 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:37.256 11:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:37.256 "params": { 00:23:37.256 "name": "Nvme1", 00:23:37.256 "trtype": "tcp", 00:23:37.256 "traddr": "10.0.0.2", 00:23:37.256 "adrfam": "ipv4", 00:23:37.256 "trsvcid": "4420", 00:23:37.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:37.256 "hdgst": false, 00:23:37.256 "ddgst": false 00:23:37.256 }, 00:23:37.256 "method": "bdev_nvme_attach_controller" 00:23:37.256 },{ 00:23:37.256 "params": { 00:23:37.256 "name": "Nvme2", 00:23:37.256 "trtype": "tcp", 00:23:37.256 "traddr": "10.0.0.2", 00:23:37.256 "adrfam": "ipv4", 00:23:37.256 "trsvcid": "4420", 00:23:37.256 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:37.256 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:37.256 "hdgst": false, 00:23:37.256 "ddgst": false 00:23:37.256 }, 00:23:37.256 "method": "bdev_nvme_attach_controller" 00:23:37.256 },{ 00:23:37.256 "params": { 00:23:37.256 "name": "Nvme3", 00:23:37.256 "trtype": "tcp", 00:23:37.256 "traddr": "10.0.0.2", 00:23:37.256 "adrfam": "ipv4", 00:23:37.256 "trsvcid": "4420", 00:23:37.256 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:37.256 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:37.256 "hdgst": false, 00:23:37.256 "ddgst": false 00:23:37.256 }, 00:23:37.256 "method": "bdev_nvme_attach_controller" 00:23:37.256 },{ 00:23:37.256 "params": { 00:23:37.256 "name": "Nvme4", 00:23:37.256 "trtype": "tcp", 00:23:37.256 "traddr": "10.0.0.2", 00:23:37.256 "adrfam": "ipv4", 00:23:37.256 "trsvcid": "4420", 00:23:37.256 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:37.256 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:37.256 "hdgst": false, 00:23:37.256 "ddgst": false 00:23:37.256 }, 00:23:37.256 "method": "bdev_nvme_attach_controller" 00:23:37.256 },{ 00:23:37.256 "params": { 00:23:37.256 "name": "Nvme5", 00:23:37.256 "trtype": "tcp", 00:23:37.256 "traddr": "10.0.0.2", 00:23:37.256 "adrfam": "ipv4", 00:23:37.256 "trsvcid": "4420", 00:23:37.256 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:37.256 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:37.256 "hdgst": false, 00:23:37.256 "ddgst": false 00:23:37.256 }, 00:23:37.256 "method": "bdev_nvme_attach_controller" 00:23:37.256 },{ 00:23:37.256 "params": { 00:23:37.256 "name": "Nvme6", 00:23:37.256 "trtype": "tcp", 00:23:37.256 "traddr": "10.0.0.2", 00:23:37.256 "adrfam": "ipv4", 00:23:37.256 "trsvcid": "4420", 00:23:37.256 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:37.256 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:37.256 "hdgst": false, 00:23:37.256 "ddgst": false 00:23:37.256 }, 00:23:37.256 "method": "bdev_nvme_attach_controller" 00:23:37.256 },{ 00:23:37.256 "params": { 00:23:37.256 "name": "Nvme7", 00:23:37.256 "trtype": "tcp", 00:23:37.256 "traddr": "10.0.0.2", 00:23:37.256 "adrfam": "ipv4", 00:23:37.256 "trsvcid": "4420", 00:23:37.256 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:37.256 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:37.256 "hdgst": false, 00:23:37.256 "ddgst": false 00:23:37.256 }, 00:23:37.256 "method": "bdev_nvme_attach_controller" 00:23:37.256 },{ 00:23:37.256 "params": { 00:23:37.256 "name": "Nvme8", 00:23:37.256 "trtype": "tcp", 00:23:37.256 "traddr": "10.0.0.2", 00:23:37.256 "adrfam": "ipv4", 00:23:37.256 "trsvcid": "4420", 00:23:37.256 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:37.256 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:23:37.256 "hdgst": false, 00:23:37.256 "ddgst": false 00:23:37.256 }, 00:23:37.256 "method": "bdev_nvme_attach_controller" 00:23:37.256 },{ 00:23:37.256 "params": { 00:23:37.256 "name": "Nvme9", 00:23:37.256 "trtype": "tcp", 00:23:37.256 "traddr": "10.0.0.2", 00:23:37.256 "adrfam": "ipv4", 00:23:37.256 "trsvcid": "4420", 00:23:37.256 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:37.256 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:37.256 "hdgst": false, 00:23:37.256 "ddgst": false 00:23:37.256 }, 00:23:37.256 "method": "bdev_nvme_attach_controller" 00:23:37.256 },{ 00:23:37.256 "params": { 00:23:37.256 "name": "Nvme10", 00:23:37.256 "trtype": "tcp", 00:23:37.256 "traddr": "10.0.0.2", 00:23:37.256 "adrfam": "ipv4", 00:23:37.256 "trsvcid": "4420", 00:23:37.256 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:37.256 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:37.256 "hdgst": false, 00:23:37.256 "ddgst": false 00:23:37.256 }, 00:23:37.256 "method": "bdev_nvme_attach_controller" 00:23:37.256 }' 00:23:37.256 [2024-07-26 11:31:32.573864] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:23:37.256 [2024-07-26 11:31:32.573956] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2168362 ] 00:23:37.256 EAL: No free 2048 kB hugepages reported on node 1 00:23:37.256 [2024-07-26 11:31:32.642208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.256 [2024-07-26 11:31:32.766963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.625 Running I/O for 1 seconds... 00:23:39.998 00:23:39.998 Latency(us) 00:23:39.998 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.998 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.998 Verification LBA range: start 0x0 length 0x400 00:23:39.998 Nvme1n1 : 1.11 193.04 12.07 0.00 0.00 308411.19 16505.36 285834.05 00:23:39.998 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.998 Verification LBA range: start 0x0 length 0x400 00:23:39.998 Nvme2n1 : 1.17 219.69 13.73 0.00 0.00 283057.68 21456.97 276513.37 00:23:39.998 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.998 Verification LBA range: start 0x0 length 0x400 00:23:39.998 Nvme3n1 : 1.16 224.67 14.04 0.00 0.00 269795.35 19320.98 264085.81 00:23:39.998 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.998 Verification LBA range: start 0x0 length 0x400 00:23:39.998 Nvme4n1 : 1.16 220.97 13.81 0.00 0.00 271462.40 33399.09 274959.93 00:23:39.998 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.998 Verification LBA range: start 0x0 length 0x400 00:23:39.998 Nvme5n1 : 1.12 171.42 10.71 0.00 0.00 342230.66 21262.79 304475.40 00:23:39.998 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.998 Verification LBA range: start 0x0 length 0x400 00:23:39.998 Nvme6n1 : 1.19 215.49 13.47 0.00 0.00 268336.92 20291.89 282727.16 00:23:39.998 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.998 Verification LBA range: start 0x0 length 0x400 00:23:39.998 Nvme7n1 : 1.18 216.63 13.54 0.00 0.00 262014.86 22913.33 281173.71 00:23:39.998 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.998 
Verification LBA range: start 0x0 length 0x400 00:23:39.998 Nvme8n1 : 1.18 221.27 13.83 0.00 0.00 251198.37 1601.99 307582.29 00:23:39.998 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.998 Verification LBA range: start 0x0 length 0x400 00:23:39.998 Nvme9n1 : 1.18 216.14 13.51 0.00 0.00 252901.45 38253.61 268746.15 00:23:39.998 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.998 Verification LBA range: start 0x0 length 0x400 00:23:39.998 Nvme10n1 : 1.20 214.13 13.38 0.00 0.00 250944.47 20000.62 316902.97 00:23:39.998 =================================================================================================================== 00:23:39.998 Total : 2113.46 132.09 0.00 0.00 273710.19 1601.99 316902.97 00:23:40.257 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:23:40.257 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:40.257 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:40.257 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:40.257 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:40.257 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:40.257 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:23:40.257 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:40.257 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:23:40.257 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:40.257 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:40.257 rmmod nvme_tcp 00:23:40.257 rmmod nvme_fabrics 00:23:40.257 rmmod nvme_keyring 00:23:40.257 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:40.257 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:23:40.257 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:23:40.257 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2167759 ']' 00:23:40.257 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2167759 00:23:40.257 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 2167759 ']' 00:23:40.257 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 2167759 00:23:40.257 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:23:40.257 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
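
The autotest_common.sh@950 through @974 sequence above is the harness's killprocess helper tearing down the target: it confirms pid 2167759 is still alive (kill -0), reads its comm name via ps to be sure it is not about to kill a sudo wrapper, then kills it and waits for it to be reaped. A condensed sketch of that flow, with error messages and retries trimmed:

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 0                 # nothing to do, already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" != sudo ] || return 1            # refuse to kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true            # reap it if it is a child of this shell
}
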
00:23:40.257 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2167759 00:23:40.257 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:40.257 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:40.257 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2167759' 00:23:40.257 killing process with pid 2167759 00:23:40.257 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 2167759 00:23:40.257 11:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 2167759 00:23:40.823 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:40.823 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:40.823 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:40.823 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:40.823 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:40.823 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.823 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:40.823 11:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.355 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:43.356 00:23:43.356 real 0m12.783s 00:23:43.356 user 0m36.506s 00:23:43.356 sys 0m3.685s 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:43.356 ************************************ 00:23:43.356 END TEST nvmf_shutdown_tc1 00:23:43.356 ************************************ 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:43.356 ************************************ 00:23:43.356 START TEST nvmf_shutdown_tc2 00:23:43.356 ************************************ 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:23:43.356 11:31:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local 
-ga mlx 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:43.356 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:43.356 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:43.356 Found net devices under 0000:84:00.0: cvl_0_0 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:43.356 11:31:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:43.356 Found net devices under 0000:84:00.1: cvl_0_1 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:43.356 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:43.357 11:31:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:23:43.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:43.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms
00:23:43.357
00:23:43.357 --- 10.0.0.2 ping statistics ---
00:23:43.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:43.357 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms
00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:43.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:43.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms
00:23:43.357
00:23:43.357 --- 10.0.0.1 ping statistics ---
00:23:43.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:43.357 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms
00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0
00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2169129
00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2169129
00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2169129 ']'
00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 --
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:43.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:43.357 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:43.357 [2024-07-26 11:31:38.801885] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:23:43.357 [2024-07-26 11:31:38.802023] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:43.357 EAL: No free 2048 kB hugepages reported on node 1 00:23:43.357 [2024-07-26 11:31:38.916161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:43.616 [2024-07-26 11:31:39.062422] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:43.616 [2024-07-26 11:31:39.062507] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:43.616 [2024-07-26 11:31:39.062528] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:43.616 [2024-07-26 11:31:39.062544] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:43.616 [2024-07-26 11:31:39.062559] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
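The nvmf_tcp_init sequence traced above (nvmf/common.sh@229-268) is what gives the test a real two-port NVMe/TCP path on one host: the first E810 port is moved into a private network namespace to act as the target side, the second stays in the root namespace as the initiator, and a ping in each direction proves the link before any NVMe traffic flows. A minimal standalone sketch of that plumbing, reusing the interface names and addresses this run derived (a different machine would substitute its own):

# Hide the target port in a namespace so initiator->target traffic
# actually crosses the wire instead of short-circuiting through lo.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                          # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                       # root ns -> namespaced target
ip netns exec "$NS" ping -c 1 10.0.0.1   # namespaced target -> root ns
# The target application then runs inside the namespace, as in the
# nvmfpid line above:
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &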
00:23:43.616 [2024-07-26 11:31:39.062667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:43.616 [2024-07-26 11:31:39.062733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:43.616 [2024-07-26 11:31:39.062775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:43.616 [2024-07-26 11:31:39.062778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:43.616 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:43.616 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:23:43.616 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:43.616 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:43.616 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:43.616 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:43.616 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:43.616 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.616 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:43.616 [2024-07-26 11:31:39.255593] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:43.616 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.616 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:43.616 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:43.616 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:43.616 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:43.616 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:43.616 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:43.616 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:43.616 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:43.616 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:43.875 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:43.875 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:43.875 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:23:43.875 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:43.875 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:43.875 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:43.875 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:43.875 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:43.875 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:43.875 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:43.875 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:43.875 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:43.875 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:43.875 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:43.875 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:43.875 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:43.875 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:43.875 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.875 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:43.875 Malloc1 00:23:43.875 [2024-07-26 11:31:39.349294] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:43.875 Malloc2 00:23:43.875 Malloc3 00:23:43.875 Malloc4 00:23:44.134 Malloc5 00:23:44.134 Malloc6 00:23:44.134 Malloc7 00:23:44.134 Malloc8 00:23:44.134 Malloc9 00:23:44.393 Malloc10 00:23:44.393 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.393 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:44.393 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:44.393 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:44.393 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2169319 00:23:44.393 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2169319 /var/tmp/bdevperf.sock 00:23:44.393 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2169319 ']' 00:23:44.393 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:44.393 11:31:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:44.393 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:44.393 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:44.393 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:23:44.393 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:44.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:44.393 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:23:44.393 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:44.393 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:44.393 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:44.393 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:44.393 { 00:23:44.393 "params": { 00:23:44.393 "name": "Nvme$subsystem", 00:23:44.393 "trtype": "$TEST_TRANSPORT", 00:23:44.393 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.393 "adrfam": "ipv4", 00:23:44.393 "trsvcid": "$NVMF_PORT", 00:23:44.393 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.393 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.393 "hdgst": ${hdgst:-false}, 00:23:44.393 "ddgst": ${ddgst:-false} 00:23:44.393 }, 00:23:44.393 "method": "bdev_nvme_attach_controller" 00:23:44.393 } 00:23:44.393 EOF 00:23:44.393 )") 00:23:44.393 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:44.393 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:44.393 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:44.393 { 00:23:44.393 "params": { 00:23:44.393 "name": "Nvme$subsystem", 00:23:44.393 "trtype": "$TEST_TRANSPORT", 00:23:44.393 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.393 "adrfam": "ipv4", 00:23:44.393 "trsvcid": "$NVMF_PORT", 00:23:44.393 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.393 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.393 "hdgst": ${hdgst:-false}, 00:23:44.393 "ddgst": ${ddgst:-false} 00:23:44.393 }, 00:23:44.393 "method": "bdev_nvme_attach_controller" 00:23:44.393 } 00:23:44.393 EOF 00:23:44.393 )") 00:23:44.393 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:44.393 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:44.393 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:44.393 { 00:23:44.393 "params": { 00:23:44.393 
"name": "Nvme$subsystem", 00:23:44.393 "trtype": "$TEST_TRANSPORT", 00:23:44.393 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.393 "adrfam": "ipv4", 00:23:44.393 "trsvcid": "$NVMF_PORT", 00:23:44.393 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.394 "hdgst": ${hdgst:-false}, 00:23:44.394 "ddgst": ${ddgst:-false} 00:23:44.394 }, 00:23:44.394 "method": "bdev_nvme_attach_controller" 00:23:44.394 } 00:23:44.394 EOF 00:23:44.394 )") 00:23:44.394 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:44.394 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:44.394 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:44.394 { 00:23:44.394 "params": { 00:23:44.394 "name": "Nvme$subsystem", 00:23:44.394 "trtype": "$TEST_TRANSPORT", 00:23:44.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.394 "adrfam": "ipv4", 00:23:44.394 "trsvcid": "$NVMF_PORT", 00:23:44.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.394 "hdgst": ${hdgst:-false}, 00:23:44.394 "ddgst": ${ddgst:-false} 00:23:44.394 }, 00:23:44.394 "method": "bdev_nvme_attach_controller" 00:23:44.394 } 00:23:44.394 EOF 00:23:44.394 )") 00:23:44.394 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:44.394 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:44.394 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:44.394 { 00:23:44.394 "params": { 00:23:44.394 "name": "Nvme$subsystem", 00:23:44.394 "trtype": "$TEST_TRANSPORT", 00:23:44.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.394 "adrfam": "ipv4", 00:23:44.394 "trsvcid": "$NVMF_PORT", 00:23:44.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.394 "hdgst": ${hdgst:-false}, 00:23:44.394 "ddgst": ${ddgst:-false} 00:23:44.394 }, 00:23:44.394 "method": "bdev_nvme_attach_controller" 00:23:44.394 } 00:23:44.394 EOF 00:23:44.394 )") 00:23:44.394 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:44.394 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:44.394 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:44.394 { 00:23:44.394 "params": { 00:23:44.394 "name": "Nvme$subsystem", 00:23:44.394 "trtype": "$TEST_TRANSPORT", 00:23:44.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.394 "adrfam": "ipv4", 00:23:44.394 "trsvcid": "$NVMF_PORT", 00:23:44.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.394 "hdgst": ${hdgst:-false}, 00:23:44.394 "ddgst": ${ddgst:-false} 00:23:44.394 }, 00:23:44.394 "method": "bdev_nvme_attach_controller" 00:23:44.394 } 00:23:44.394 EOF 00:23:44.394 )") 00:23:44.394 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:44.394 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:23:44.394 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:44.394 { 00:23:44.394 "params": { 00:23:44.394 "name": "Nvme$subsystem", 00:23:44.394 "trtype": "$TEST_TRANSPORT", 00:23:44.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.394 "adrfam": "ipv4", 00:23:44.394 "trsvcid": "$NVMF_PORT", 00:23:44.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.394 "hdgst": ${hdgst:-false}, 00:23:44.394 "ddgst": ${ddgst:-false} 00:23:44.394 }, 00:23:44.394 "method": "bdev_nvme_attach_controller" 00:23:44.394 } 00:23:44.394 EOF 00:23:44.394 )") 00:23:44.394 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:44.394 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:44.394 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:44.394 { 00:23:44.394 "params": { 00:23:44.394 "name": "Nvme$subsystem", 00:23:44.394 "trtype": "$TEST_TRANSPORT", 00:23:44.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.394 "adrfam": "ipv4", 00:23:44.394 "trsvcid": "$NVMF_PORT", 00:23:44.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.394 "hdgst": ${hdgst:-false}, 00:23:44.394 "ddgst": ${ddgst:-false} 00:23:44.394 }, 00:23:44.394 "method": "bdev_nvme_attach_controller" 00:23:44.394 } 00:23:44.394 EOF 00:23:44.394 )") 00:23:44.394 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:44.394 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:44.394 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:44.394 { 00:23:44.394 "params": { 00:23:44.394 "name": "Nvme$subsystem", 00:23:44.394 "trtype": "$TEST_TRANSPORT", 00:23:44.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.394 "adrfam": "ipv4", 00:23:44.394 "trsvcid": "$NVMF_PORT", 00:23:44.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.394 "hdgst": ${hdgst:-false}, 00:23:44.394 "ddgst": ${ddgst:-false} 00:23:44.394 }, 00:23:44.394 "method": "bdev_nvme_attach_controller" 00:23:44.394 } 00:23:44.394 EOF 00:23:44.394 )") 00:23:44.394 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:44.394 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:44.394 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:44.394 { 00:23:44.394 "params": { 00:23:44.394 "name": "Nvme$subsystem", 00:23:44.394 "trtype": "$TEST_TRANSPORT", 00:23:44.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.394 "adrfam": "ipv4", 00:23:44.394 "trsvcid": "$NVMF_PORT", 00:23:44.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.394 "hdgst": ${hdgst:-false}, 00:23:44.394 "ddgst": ${ddgst:-false} 00:23:44.394 }, 00:23:44.394 "method": "bdev_nvme_attach_controller" 00:23:44.394 } 00:23:44.394 EOF 00:23:44.394 )") 00:23:44.394 11:31:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:44.394 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:23:44.394 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:23:44.394 11:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:44.394 "params": { 00:23:44.394 "name": "Nvme1", 00:23:44.394 "trtype": "tcp", 00:23:44.394 "traddr": "10.0.0.2", 00:23:44.394 "adrfam": "ipv4", 00:23:44.394 "trsvcid": "4420", 00:23:44.394 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:44.394 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:44.394 "hdgst": false, 00:23:44.394 "ddgst": false 00:23:44.394 }, 00:23:44.394 "method": "bdev_nvme_attach_controller" 00:23:44.394 },{ 00:23:44.394 "params": { 00:23:44.394 "name": "Nvme2", 00:23:44.394 "trtype": "tcp", 00:23:44.394 "traddr": "10.0.0.2", 00:23:44.394 "adrfam": "ipv4", 00:23:44.394 "trsvcid": "4420", 00:23:44.394 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:44.394 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:44.394 "hdgst": false, 00:23:44.394 "ddgst": false 00:23:44.394 }, 00:23:44.394 "method": "bdev_nvme_attach_controller" 00:23:44.394 },{ 00:23:44.394 "params": { 00:23:44.394 "name": "Nvme3", 00:23:44.394 "trtype": "tcp", 00:23:44.394 "traddr": "10.0.0.2", 00:23:44.394 "adrfam": "ipv4", 00:23:44.394 "trsvcid": "4420", 00:23:44.394 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:44.394 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:44.394 "hdgst": false, 00:23:44.394 "ddgst": false 00:23:44.394 }, 00:23:44.394 "method": "bdev_nvme_attach_controller" 00:23:44.394 },{ 00:23:44.394 "params": { 00:23:44.394 "name": "Nvme4", 00:23:44.394 "trtype": "tcp", 00:23:44.394 "traddr": "10.0.0.2", 00:23:44.394 "adrfam": "ipv4", 00:23:44.394 "trsvcid": "4420", 00:23:44.394 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:44.394 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:44.394 "hdgst": false, 00:23:44.394 "ddgst": false 00:23:44.394 }, 00:23:44.394 "method": "bdev_nvme_attach_controller" 00:23:44.394 },{ 00:23:44.394 "params": { 00:23:44.394 "name": "Nvme5", 00:23:44.394 "trtype": "tcp", 00:23:44.394 "traddr": "10.0.0.2", 00:23:44.394 "adrfam": "ipv4", 00:23:44.394 "trsvcid": "4420", 00:23:44.394 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:44.394 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:44.394 "hdgst": false, 00:23:44.394 "ddgst": false 00:23:44.395 }, 00:23:44.395 "method": "bdev_nvme_attach_controller" 00:23:44.395 },{ 00:23:44.395 "params": { 00:23:44.395 "name": "Nvme6", 00:23:44.395 "trtype": "tcp", 00:23:44.395 "traddr": "10.0.0.2", 00:23:44.395 "adrfam": "ipv4", 00:23:44.395 "trsvcid": "4420", 00:23:44.395 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:44.395 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:44.395 "hdgst": false, 00:23:44.395 "ddgst": false 00:23:44.395 }, 00:23:44.395 "method": "bdev_nvme_attach_controller" 00:23:44.395 },{ 00:23:44.395 "params": { 00:23:44.395 "name": "Nvme7", 00:23:44.395 "trtype": "tcp", 00:23:44.395 "traddr": "10.0.0.2", 00:23:44.395 "adrfam": "ipv4", 00:23:44.395 "trsvcid": "4420", 00:23:44.395 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:44.395 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:44.395 "hdgst": false, 00:23:44.395 "ddgst": false 00:23:44.395 }, 00:23:44.395 "method": "bdev_nvme_attach_controller" 00:23:44.395 },{ 00:23:44.395 "params": { 00:23:44.395 "name": "Nvme8", 00:23:44.395 "trtype": "tcp", 
00:23:44.395 "traddr": "10.0.0.2", 00:23:44.395 "adrfam": "ipv4", 00:23:44.395 "trsvcid": "4420", 00:23:44.395 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:44.395 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:44.395 "hdgst": false, 00:23:44.395 "ddgst": false 00:23:44.395 }, 00:23:44.395 "method": "bdev_nvme_attach_controller" 00:23:44.395 },{ 00:23:44.395 "params": { 00:23:44.395 "name": "Nvme9", 00:23:44.395 "trtype": "tcp", 00:23:44.395 "traddr": "10.0.0.2", 00:23:44.395 "adrfam": "ipv4", 00:23:44.395 "trsvcid": "4420", 00:23:44.395 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:44.395 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:44.395 "hdgst": false, 00:23:44.395 "ddgst": false 00:23:44.395 }, 00:23:44.395 "method": "bdev_nvme_attach_controller" 00:23:44.395 },{ 00:23:44.395 "params": { 00:23:44.395 "name": "Nvme10", 00:23:44.395 "trtype": "tcp", 00:23:44.395 "traddr": "10.0.0.2", 00:23:44.395 "adrfam": "ipv4", 00:23:44.395 "trsvcid": "4420", 00:23:44.395 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:44.395 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:44.395 "hdgst": false, 00:23:44.395 "ddgst": false 00:23:44.395 }, 00:23:44.395 "method": "bdev_nvme_attach_controller" 00:23:44.395 }' 00:23:44.395 [2024-07-26 11:31:39.914165] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:23:44.395 [2024-07-26 11:31:39.914254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2169319 ] 00:23:44.395 EAL: No free 2048 kB hugepages reported on node 1 00:23:44.395 [2024-07-26 11:31:39.984421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.653 [2024-07-26 11:31:40.111543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.064 Running I/O for 10 seconds... 
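The block of near-identical heredocs above is gen_nvmf_target_json at work: for each of the ten subsystems it appends one bdev_nvme_attach_controller fragment to a bash array, comma-joins the array, sanity-checks the result with jq, and hands it to bdevperf as an anonymous JSON file (the --json /dev/fd/63 argument on the traced command line). A condensed sketch of the same pattern for three controllers; the outer "subsystems" wrapper and the hardcoded digest flags are assumptions for illustration, since this excerpt only shows the joined config entries being printed:

config=()
for subsystem in 1 2 3; do   # the traced run generates entries for 1..10
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done
# Comma-join the fragments and validate, as the traced run does with
# IFS=, and jq; the wrapper object here is assumed.
joined=$(IFS=,; printf '%s' "${config[*]}")
json="{\"subsystems\":[{\"subsystem\":\"bdev\",\"config\":[$joined]}]}"
echo "$json" | jq .
# Process substitution yields a /dev/fd/NN path, which is where the
# /dev/fd/63 on the traced bdevperf invocation comes from.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(echo "$json") \
    -q 64 -o 65536 -w verify -t 10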
00:23:46.322 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:46.322 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:23:46.322 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:46.322 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.322 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:46.322 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.322 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:46.322 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:46.322 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:46.322 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:23:46.322 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:23:46.322 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:46.323 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:46.323 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:46.323 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.323 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:46.323 11:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:46.581 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.581 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:46.581 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:46.581 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:46.839 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:46.839 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:46.839 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:46.839 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:46.839 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.839 11:31:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:46.839 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.839 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:46.840 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:46.840 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:47.098 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:47.098 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:47.098 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:47.098 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.098 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:47.098 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:47.098 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.098 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=135 00:23:47.098 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 135 -ge 100 ']' 00:23:47.098 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:23:47.098 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:23:47.098 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:23:47.098 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2169319 00:23:47.098 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 2169319 ']' 00:23:47.098 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 2169319 00:23:47.098 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:23:47.098 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:47.098 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2169319 00:23:47.098 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:47.098 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:47.098 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2169319' 00:23:47.098 killing process with pid 2169319 00:23:47.098 11:31:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 2169319
00:23:47.098 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 2169319
00:23:47.356 Received shutdown signal, test time was about 1.032069 seconds
00:23:47.356
00:23:47.356                                                Latency(us)
00:23:47.356 Device Information   : runtime(s)      IOPS    MiB/s   Fail/s   TO/s      Average        min        max
00:23:47.356 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:47.356 Verification LBA range: start 0x0 length 0x400
00:23:47.356 Nvme1n1              :       0.99    198.98    12.44     0.00   0.00    316079.43    4805.97  273406.48
00:23:47.356 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:47.356 Verification LBA range: start 0x0 length 0x400
00:23:47.356 Nvme2n1              :       1.03    248.26    15.52     0.00   0.00    249715.86   20971.52  281173.71
00:23:47.356 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:47.356 Verification LBA range: start 0x0 length 0x400
00:23:47.356 Nvme3n1              :       1.02    250.71    15.67     0.00   0.00    242140.73   18350.08  287387.50
00:23:47.356 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:47.356 Verification LBA range: start 0x0 length 0x400
00:23:47.356 Nvme4n1              :       1.03    249.68    15.61     0.00   0.00    238045.87   20291.89  276513.37
00:23:47.356 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:47.356 Verification LBA range: start 0x0 length 0x400
00:23:47.356 Nvme5n1              :       0.99    193.37    12.09     0.00   0.00    300062.85   27962.03  282727.16
00:23:47.356 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:47.356 Verification LBA range: start 0x0 length 0x400
00:23:47.356 Nvme6n1              :       1.00    191.81    11.99     0.00   0.00    294260.69   21942.42  278066.82
00:23:47.356 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:47.356 Verification LBA range: start 0x0 length 0x400
00:23:47.356 Nvme7n1              :       0.97    197.33    12.33     0.00   0.00    280406.09   37671.06  288940.94
00:23:47.356 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:47.356 Verification LBA range: start 0x0 length 0x400
00:23:47.356 Nvme8n1              :       1.00    192.11    12.01     0.00   0.00    282700.61   18350.08  288940.94
00:23:47.356 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:47.356 Verification LBA range: start 0x0 length 0x400
00:23:47.357 Nvme9n1              :       1.01    190.05    11.88     0.00   0.00    279629.62   21845.33  290494.39
00:23:47.357 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:47.357 Verification LBA range: start 0x0 length 0x400
00:23:47.357 Nvme10n1             :       1.02    189.11    11.82     0.00   0.00    274777.88   20874.43  313796.08
===================================================================================================================
00:23:47.357 Total                :            2101.41   131.34     0.00   0.00    272910.89    4805.97  313796.08
00:23:47.615 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:23:48.547 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2169129
11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:48.547 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:48.547 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:48.547 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:48.547 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:23:48.547 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:48.547 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:23:48.547 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:48.547 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:48.547 rmmod nvme_tcp 00:23:48.547 rmmod nvme_fabrics 00:23:48.547 rmmod nvme_keyring 00:23:48.547 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:48.547 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:23:48.547 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:23:48.547 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2169129 ']' 00:23:48.547 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2169129 00:23:48.547 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 2169129 ']' 00:23:48.547 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 2169129 00:23:48.547 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:23:48.547 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:48.547 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2169129 00:23:48.547 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:48.547 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:48.547 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2169129' 00:23:48.547 killing process with pid 2169129 00:23:48.547 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 2169129 00:23:48.547 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 2169129 00:23:49.483 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:49.483 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
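The read_io_count probes traced before the shutdown are shutdown.sh's waitforio loop: poll bdevperf over its RPC socket until the first bdev has completed at least 100 reads (3, then 67, then 135 in this run), giving up after ten attempts spaced 0.25 s apart. A condensed sketch of that loop, assuming the traced rpc_cmd wrapper resolves to SPDK's stock scripts/rpc.py client:

# Poll Nvme1n1's read counter via the bdevperf RPC socket until it
# crosses the threshold that proves I/O is actually in flight, so the
# subsequent target kill really is a shutdown under load.
waitforio() {
  local sock=$1 bdev=$2 ret=1 i count
  for ((i = 10; i != 0; i--)); do
    count=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" \
              | jq -r '.bdevs[0].num_read_ops')
    if [ "$count" -ge 100 ]; then
      ret=0
      break
    fi
    sleep 0.25
  done
  return $ret
}
waitforio /var/tmp/bdevperf.sock Nvme1n1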
00:23:49.483 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:49.483 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:49.483 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:49.483 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.483 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:49.483 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:51.420 00:23:51.420 real 0m8.347s 00:23:51.420 user 0m25.621s 00:23:51.420 sys 0m1.655s 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:51.420 ************************************ 00:23:51.420 END TEST nvmf_shutdown_tc2 00:23:51.420 ************************************ 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:51.420 ************************************ 00:23:51.420 START TEST nvmf_shutdown_tc3 00:23:51.420 ************************************ 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
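The teardown traced just above is nvmftestfini unwinding everything nvmftestinit built: kill and reap the target, unload the kernel NVMe/TCP stack (the rmmod lines), drop the test namespace, and flush leftover addresses; tc3's nvmftestinit then rebuilds it all from scratch, starting with the _remove_spdk_ns call to clear any stale namespace. A rough equivalent of that teardown; the explicit ip netns delete is an assumption standing in for the _remove_spdk_ns helper, whose body this log does not show:

# Reap the target first so nothing still holds the namespace or the NICs.
kill "$nvmfpid" && wait "$nvmfpid"
modprobe -v -r nvme-tcp       # also drags out nvme_fabrics/nvme_keyring, per the rmmod lines
modprobe -v -r nvme-fabrics
ip netns delete cvl_0_0_ns_spdk   # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1          # matches the addr flush traced above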
00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:51.420 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:51.420 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:51.420 11:31:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:51.420 Found net devices under 0000:84:00.0: cvl_0_0 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.420 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:51.421 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.421 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:51.421 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.421 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:51.421 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:51.421 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.421 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:51.421 Found net devices under 0000:84:00.1: cvl_0_1 00:23:51.421 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.421 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:51.421 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:51.421 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:51.421 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:51.421 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:51.421 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:51.421 11:31:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:51.421 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:51.421 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:51.421 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:51.421 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:51.421 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:51.421 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:51.421 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:51.421 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:51.421 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:51.421 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:51.421 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:51.421 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:51.421 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:51.421 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:51.421 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:51.680 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:51.680 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:51.680 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:51.680 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:51.680 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:23:51.680 00:23:51.680 --- 10.0.0.2 ping statistics --- 00:23:51.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.680 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:23:51.680 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:51.680 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:51.680 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:23:51.680 00:23:51.680 --- 10.0.0.1 ping statistics --- 00:23:51.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.680 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:23:51.680 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:51.680 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:23:51.680 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:51.680 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:51.680 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:51.680 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:51.680 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:51.680 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:51.680 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:51.680 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:51.680 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:51.680 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:51.680 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:51.680 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2170235 00:23:51.680 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:51.680 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2170235 00:23:51.680 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 2170235 ']' 00:23:51.680 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.680 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:51.680 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
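The nvmf_tcp_init sequence traced above is small enough to replay by hand. Below is a condensed sketch of the same steps, assuming root and the two back-to-back ice ports (cvl_0_0/cvl_0_1) enumerated earlier; the interface names, addresses, namespace name, and port 4420 all come from the trace itself:

    #!/usr/bin/env bash
    # Replay of the nvmf_tcp_init steps from the trace above (run as root).
    set -e
    TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"

    # Move the target-side port into its own namespace so target and
    # initiator get independent network stacks on a single host.
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INI_IF"                       # initiator side
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"   # target side

    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up

    # Admit NVMe/TCP traffic (port 4420) arriving on the initiator port.
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

    # Sanity-check both directions, exactly as the trace does; this works
    # because the two ports are physically looped on this test node.
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1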
00:23:51.680 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:51.680 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:51.680 [2024-07-26 11:31:47.237556] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:23:51.680 [2024-07-26 11:31:47.237654] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.680 EAL: No free 2048 kB hugepages reported on node 1 00:23:51.939 [2024-07-26 11:31:47.357563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:51.939 [2024-07-26 11:31:47.500578] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:51.939 [2024-07-26 11:31:47.500648] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:51.939 [2024-07-26 11:31:47.500664] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:51.939 [2024-07-26 11:31:47.500678] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:51.939 [2024-07-26 11:31:47.500690] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:51.939 [2024-07-26 11:31:47.500770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:51.939 [2024-07-26 11:31:47.500853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:51.939 [2024-07-26 11:31:47.500918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:51.939 [2024-07-26 11:31:47.500923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:52.198 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:52.198 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:23:52.198 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:52.198 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:52.198 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:52.198 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:52.198 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:52.198 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.198 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:52.198 [2024-07-26 11:31:47.692762] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:52.198 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.198 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # 
num_subsystems=({1..10}) 00:23:52.198 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:52.198 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:52.198 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:52.198 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:52.198 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:52.198 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat [the shutdown.sh@27/@28 for-cat pair repeats identically for all ten subsystems; nine duplicate iterations elided] 00:23:52.198 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:52.198 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.198 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:23:52.198 Malloc1 00:23:52.198 [2024-07-26 11:31:47.791167] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:52.198 Malloc2 00:23:52.457 Malloc3 00:23:52.457 Malloc4 00:23:52.457 Malloc5 00:23:52.457 Malloc6 00:23:52.457 Malloc7 00:23:52.716 Malloc8 00:23:52.716 Malloc9 00:23:52.716 Malloc10 00:23:52.716 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.716 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:52.716 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:52.716 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:52.716 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2170415 00:23:52.716 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2170415 /var/tmp/bdevperf.sock 00:23:52.716 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 2170415 ']' 00:23:52.716 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:52.716 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:52.716 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:52.716 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:52.716 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:23:52.716 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:52.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
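Note that the create_subsystems loop above only traces `cat` appending to rpcs.txt; the heredoc body itself is never echoed. Judging from what the rpc_cmd replay produces (Malloc1..Malloc10 and a listener on 10.0.0.2:4420), the batch plausibly looks like the sketch below. The RPC names are standard SPDK RPCs, but the malloc geometry (64 MiB / 512 B blocks) and the serial numbers are illustrative placeholders, not values recovered from this run:

    # Hypothetical reconstruction of the per-subsystem batch in rpcs.txt.
    : > rpcs.txt
    for i in {1..10}; do
      {
        echo "bdev_malloc_create -b Malloc$i 64 512"
        echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
        echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
        echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
      } >> rpcs.txt
    done
    # The harness's rpc_cmd helper replays the file over a persistent RPC
    # session; invoking rpc.py once per line is an equivalent stand-in.
    while read -r rpc; do ./scripts/rpc.py $rpc; done < rpcs.txt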
00:23:52.716 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:52.716 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:23:52.716 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:52.716 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:52.716 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:52.716 { 00:23:52.716 "params": { 00:23:52.716 "name": "Nvme$subsystem", 00:23:52.716 "trtype": "$TEST_TRANSPORT", 00:23:52.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:52.716 "adrfam": "ipv4", 00:23:52.716 "trsvcid": "$NVMF_PORT", 00:23:52.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:52.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:52.716 "hdgst": ${hdgst:-false}, 00:23:52.716 "ddgst": ${ddgst:-false} 00:23:52.716 }, 00:23:52.716 "method": "bdev_nvme_attach_controller" 00:23:52.716 } 00:23:52.716 EOF 00:23:52.716 )") 00:23:52.716 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat [the nvmf/common.sh@534 loop step and @554 heredoc template repeat verbatim for each of the ten subsystems; nine duplicate expansions elided] 00:23:52.717 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq .
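Each pass through the @534/@554 pair above pushes one still-unexpanded template onto the config array; the @556 and following steps validate and join it. The join is the neat part: expanding "${config[*]}" while IFS is a comma glues the array elements into a comma-separated JSON list in one step, which is exactly the shape visible in the printf output below. A minimal standalone sketch of that mechanism (three subsystems and a pared-down params block, not the full template):

    # Collect one JSON object per subsystem, then comma-join via IFS.
    config=()
    for subsystem in 1 2 3; do
      config+=("{ \"params\": { \"name\": \"Nvme$subsystem\", \"trtype\": \"tcp\" }, \"method\": \"bdev_nvme_attach_controller\" }")
    done
    # "${config[*]}" joins elements with the first character of IFS; the
    # subshell keeps the IFS change local. jq validates the result.
    (IFS=,; printf '[%s]\n' "${config[*]}") | jq .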
00:23:52.717 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:23:52.717 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:52.717 "params": { 00:23:52.717 "name": "Nvme1", 00:23:52.717 "trtype": "tcp", 00:23:52.717 "traddr": "10.0.0.2", 00:23:52.717 "adrfam": "ipv4", 00:23:52.717 "trsvcid": "4420", 00:23:52.717 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.717 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:52.717 "hdgst": false, 00:23:52.717 "ddgst": false 00:23:52.717 }, 00:23:52.717 "method": "bdev_nvme_attach_controller" 00:23:52.717 },{ 00:23:52.717 "params": { 00:23:52.717 "name": "Nvme2", 00:23:52.717 "trtype": "tcp", 00:23:52.717 "traddr": "10.0.0.2", 00:23:52.717 "adrfam": "ipv4", 00:23:52.717 "trsvcid": "4420", 00:23:52.717 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:52.717 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:52.717 "hdgst": false, 00:23:52.717 "ddgst": false 00:23:52.717 }, 00:23:52.717 "method": "bdev_nvme_attach_controller" 00:23:52.717 },{ 00:23:52.717 "params": { 00:23:52.717 "name": "Nvme3", 00:23:52.717 "trtype": "tcp", 00:23:52.717 "traddr": "10.0.0.2", 00:23:52.717 "adrfam": "ipv4", 00:23:52.717 "trsvcid": "4420", 00:23:52.717 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:52.717 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:52.717 "hdgst": false, 00:23:52.717 "ddgst": false 00:23:52.717 }, 00:23:52.717 "method": "bdev_nvme_attach_controller" 00:23:52.717 },{ 00:23:52.717 "params": { 00:23:52.717 "name": "Nvme4", 00:23:52.717 "trtype": "tcp", 00:23:52.717 "traddr": "10.0.0.2", 00:23:52.717 "adrfam": "ipv4", 00:23:52.717 "trsvcid": "4420", 00:23:52.717 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:52.717 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:52.717 "hdgst": false, 00:23:52.717 "ddgst": false 00:23:52.717 }, 00:23:52.717 "method": "bdev_nvme_attach_controller" 00:23:52.717 },{ 00:23:52.717 "params": { 00:23:52.717 "name": "Nvme5", 00:23:52.717 "trtype": "tcp", 00:23:52.717 "traddr": "10.0.0.2", 00:23:52.717 "adrfam": "ipv4", 00:23:52.717 "trsvcid": "4420", 00:23:52.717 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:52.717 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:52.717 "hdgst": false, 00:23:52.717 "ddgst": false 00:23:52.717 }, 00:23:52.717 "method": "bdev_nvme_attach_controller" 00:23:52.717 },{ 00:23:52.717 "params": { 00:23:52.717 "name": "Nvme6", 00:23:52.717 "trtype": "tcp", 00:23:52.717 "traddr": "10.0.0.2", 00:23:52.717 "adrfam": "ipv4", 00:23:52.717 "trsvcid": "4420", 00:23:52.717 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:52.717 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:52.717 "hdgst": false, 00:23:52.717 "ddgst": false 00:23:52.717 }, 00:23:52.717 "method": "bdev_nvme_attach_controller" 00:23:52.717 },{ 00:23:52.717 "params": { 00:23:52.717 "name": "Nvme7", 00:23:52.717 "trtype": "tcp", 00:23:52.717 "traddr": "10.0.0.2", 00:23:52.717 "adrfam": "ipv4", 00:23:52.717 "trsvcid": "4420", 00:23:52.717 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:52.717 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:52.717 "hdgst": false, 00:23:52.717 "ddgst": false 00:23:52.717 }, 00:23:52.717 "method": "bdev_nvme_attach_controller" 00:23:52.717 },{ 00:23:52.718 "params": { 00:23:52.718 "name": "Nvme8", 00:23:52.718 "trtype": "tcp", 00:23:52.718 "traddr": "10.0.0.2", 00:23:52.718 "adrfam": "ipv4", 00:23:52.718 "trsvcid": "4420", 00:23:52.718 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:52.718 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:23:52.718 "hdgst": false, 00:23:52.718 "ddgst": false 00:23:52.718 }, 00:23:52.718 "method": "bdev_nvme_attach_controller" 00:23:52.718 },{ 00:23:52.718 "params": { 00:23:52.718 "name": "Nvme9", 00:23:52.718 "trtype": "tcp", 00:23:52.718 "traddr": "10.0.0.2", 00:23:52.718 "adrfam": "ipv4", 00:23:52.718 "trsvcid": "4420", 00:23:52.718 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:52.718 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:52.718 "hdgst": false, 00:23:52.718 "ddgst": false 00:23:52.718 }, 00:23:52.718 "method": "bdev_nvme_attach_controller" 00:23:52.718 },{ 00:23:52.718 "params": { 00:23:52.718 "name": "Nvme10", 00:23:52.718 "trtype": "tcp", 00:23:52.718 "traddr": "10.0.0.2", 00:23:52.718 "adrfam": "ipv4", 00:23:52.718 "trsvcid": "4420", 00:23:52.718 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:52.718 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:52.718 "hdgst": false, 00:23:52.718 "ddgst": false 00:23:52.718 }, 00:23:52.718 "method": "bdev_nvme_attach_controller" 00:23:52.718 }' 00:23:52.718 [2024-07-26 11:31:48.368920] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:23:52.718 [2024-07-26 11:31:48.369007] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2170415 ] 00:23:52.976 EAL: No free 2048 kB hugepages reported on node 1 00:23:52.976 [2024-07-26 11:31:48.446727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.976 [2024-07-26 11:31:48.570212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.876 Running I/O for 10 seconds... 00:23:54.876 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:54.876 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:23:54.876 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:54.876 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.876 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:54.876 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.876 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:54.876 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:54.876 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:54.876 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:54.876 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:23:54.876 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:23:54.876 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@59 -- # (( i = 10 )) 00:23:54.876 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:54.876 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:54.876 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:54.876 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.876 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:55.172 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.172 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=16 00:23:55.172 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 16 -ge 100 ']' 00:23:55.172 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:55.172 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:55.172 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:55.172 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:55.172 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:55.172 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.172 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:55.172 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.429 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:55.429 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:55.429 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:55.703 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:55.703 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:55.703 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:55.703 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:55.703 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.703 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:55.703 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.703 11:31:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:55.703 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:55.703 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:23:55.703 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:23:55.703 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:23:55.703 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2170235 00:23:55.703 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 2170235 ']' 00:23:55.703 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 2170235 00:23:55.703 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:23:55.703 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:55.703 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2170235 00:23:55.703 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:55.703 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:55.704 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2170235' 00:23:55.704 killing process with pid 2170235 00:23:55.704 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 2170235 00:23:55.704 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 2170235 00:23:55.704 [2024-07-26 11:31:51.160172] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160257] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160274] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160289] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160303] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160317] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160330] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160344] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160357] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160371] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160384] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160398] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160422] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160446] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160461] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160475] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160489] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160503] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160516] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160530] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160543] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160557] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160570] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160583] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160597] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160621] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160635] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160649] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160662] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160675] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the 
state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160689] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160702] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160715] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160729] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160742] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160756] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160770] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160783] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160796] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160809] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160822] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160835] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160848] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.160861] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1441dc0 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.162284] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1443f20 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.162322] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1443f20 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.162339] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1443f20 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.162354] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1443f20 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.162368] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1443f20 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.162381] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1443f20 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.162395] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1443f20 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.162408] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1443f20 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.162421] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1443f20 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.162449] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1443f20 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.162464] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1443f20 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.162478] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1443f20 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.162491] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1443f20 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.162504] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1443f20 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.162518] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1443f20 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.162531] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1443f20 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.162544] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1443f20 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.162557] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1443f20 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.162571] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1443f20 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.162584] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1443f20 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.162597] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1443f20 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.162611] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1443f20 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.162624] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1443f20 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.162638] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1443f20 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.162651] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1443f20 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.162665] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1443f20 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.162678] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1443f20 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.162691] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1443f20 is same with the state(5) to be set 00:23:55.704 [2024-07-26 11:31:51.162705] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1443f20 is same with the state(5) to be set 00:23:55.704 [2024-07-26 
11:31:51.162719] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1443f20 is same with the state(5) to be set
00:23:55.704 (last message repeated 33 more times through [2024-07-26 11:31:51.163169])
00:23:55.705 [2024-07-26 11:31:51.163116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:55.705 [2024-07-26 11:31:51.163161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:55.705 [2024-07-26 11:31:51.163181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:55.705 [2024-07-26 11:31:51.163196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:55.705 [2024-07-26 11:31:51.163211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:55.705 [2024-07-26 11:31:51.163226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:55.705 [2024-07-26 11:31:51.163241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:55.705 [2024-07-26 11:31:51.163255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:55.705 [2024-07-26 11:31:51.163269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1910200 is same with the state(5) to be set
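
The flood of identical *ERROR* lines above is SPDK's nvmf TCP transport refusing a redundant receive-state transition while the connection at tqpair=0x1443f20 is torn down. A minimal sketch of the guard that emits this message, paraphrased from lib/nvmf/tcp.c (struct and enum contents abridged; which state the printed value 5 names depends on the SPDK revision's enum layout):

    #include "spdk/log.h"

    enum nvme_tcp_pdu_recv_state { NVME_TCP_PDU_RECV_STATE_AWAIT_PDU_READY /* , ... */ };

    struct spdk_nvmf_tcp_qpair {
            enum nvme_tcp_pdu_recv_state recv_state;
            /* ... */
    };

    static void
    nvmf_tcp_qpair_set_recv_state(struct spdk_nvmf_tcp_qpair *tqpair,
                                  enum nvme_tcp_pdu_recv_state state)
    {
            if (tqpair->recv_state == state) {
                    /* The log line above: the qpair is already in the requested
                     * state, so the call is reported and dropped as a no-op. */
                    SPDK_ERRLOG("The recv state of tqpair=%p is same with the state(%d) to be set\n",
                                tqpair, state);
                    return;
            }
            tqpair->recv_state = state;
            /* ... per-state bookkeeping follows in the real function ... */
    }

Repeated at microsecond intervals during qpair teardown, the message appears to be noise rather than a failure in itself; the aborted admin and I/O commands below are the actual effect of the disconnect.
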
00:23:55.705 [2024-07-26 11:31:51.164646] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1442280 is same with the state(5) to be set
00:23:55.705 [2024-07-26 11:31:51.166070] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1442740 is same with the state(5) to be set
00:23:55.705 [2024-07-26 11:31:51.166886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.705 [2024-07-26 11:31:51.166918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:55.705 (analogous WRITE command/ABORTED completion pairs repeat for cid:60-63, lba stepping by 128 from 32256 to 32640)
00:23:55.705 [2024-07-26 11:31:51.167079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.705 [2024-07-26 11:31:51.167094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:55.705 (analogous READ command/ABORTED completion pairs repeat for cid:1-23, lba stepping by 128 from 24704 to 27520)
00:23:55.705 [2024-07-26 11:31:51.167869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.706 [2024-07-26 11:31:51.167883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:55.706 (analogous READ command/ABORTED completion pairs repeat for cid:25-54, lba stepping by 128 from 27776 to 31488)
00:23:55.706 [2024-07-26 11:31:51.167930] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14430e0 is same with the state(5) to be set
00:23:55.707 (last message repeated through [2024-07-26 11:31:51.168895], interleaved with the READ/ABORTED records above)
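
Every aborted command above carries the same completion status, printed as "(00/08) ... p:0 m:0 dnr:0": status code type 0x0 (generic command status) with status code 0x08, Command Aborted due to SQ Deletion, which is what in-flight I/O receives when its submission queue is deleted during a controller reset. A small standalone decoder (illustrative only, not SPDK code) showing how those fields unpack from the 16-bit NVMe completion status word:

    #include <stdint.h>
    #include <stdio.h>

    /* NVMe completion status word layout, as SPDK prints it:
     * bit 0 = phase tag (p), bits 8:1 = status code (SC),
     * bits 11:9 = status code type (SCT), bit 14 = more (m),
     * bit 15 = do not retry (dnr). */
    int main(void)
    {
            uint16_t status = (0x0 << 9) | (0x08 << 1); /* SCT 0x0, SC 0x08 */
            printf("(%02x/%02x) p:%u m:%u dnr:%u\n",
                   (status >> 9) & 0x7,  /* 00 = generic command status */
                   (status >> 1) & 0xff, /* 08 = Command Aborted due to SQ Deletion */
                   status & 0x1,
                   (status >> 14) & 0x1,
                   (status >> 15) & 0x1);
            return 0;
    }

Compiled and run, this prints "(00/08) p:0 m:0 dnr:0", matching every completion in the storm above.
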
00:23:55.707 [2024-07-26 11:31:51.168931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.707 [2024-07-26 11:31:51.168945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:55.707 (analogous READ command/ABORTED completion pairs repeat for cid:56-58, lba stepping by 128 from 31744 to 32000)
00:23:55.707 [2024-07-26 11:31:51.169132] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e795b0 was disconnected and freed. reset controller.
00:23:55.707 [2024-07-26 11:31:51.169230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.707 [2024-07-26 11:31:51.169251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:55.708 (analogous WRITE command/ABORTED completion pairs repeat for cid:32-56, lba stepping by 128 from 28672 to 31744)
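
The bdev_nvme_disconnected_qpair_cb NOTICE above ("was disconnected and freed. reset controller.") marks the host-side reaction to the dropped connection. A hedged sketch of how such a disconnected-qpair callback is wired up with SPDK's poll-group API, under the assumption that bdev_nvme registers its callback through this same hook; the callback body below is a stand-in, not the bdev_nvme implementation:

    #include "spdk/nvme.h"

    static void
    on_disconnected_qpair(struct spdk_nvme_qpair *qpair, void *poll_group_ctx)
    {
            (void)qpair;
            (void)poll_group_ctx;
            /* In-flight commands on the dropped qpair have already completed
             * with ABORTED - SQ DELETION (00/08); a real handler frees the
             * qpair and schedules a controller reset, after which the aborted
             * I/O can be resubmitted. */
    }

    static void
    poll_once(struct spdk_nvme_poll_group *group)
    {
            /* The third argument is invoked once for each qpair found
             * disconnected during this processing pass. */
            spdk_nvme_poll_group_process_completions(group, 0, on_disconnected_qpair);
    }
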
00:23:55.708 [2024-07-26 11:31:51.170126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.708 [2024-07-26 11:31:51.170141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:55.708 (analogous WRITE command/ABORTED completion pairs repeat for cid:58-63, lba stepping by 128 from 32000 to 32640)
00:23:55.708 [2024-07-26 11:31:51.170249] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1679610 is same with the state(5) to be set
00:23:55.709 (last message repeated through [2024-07-26 11:31:51.171175], interleaved with the READ/ABORTED records below)
00:23:55.709 [2024-07-26 11:31:51.170347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.709 [2024-07-26 11:31:51.170363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:55.710 (analogous READ command/ABORTED completion pairs repeat for cid:1-23, lba stepping by 128 from 24704 to 27520)
00:23:55.710 [2024-07-26 11:31:51.171165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.710 [2024-07-26 11:31:51.171179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08)
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.710 [2024-07-26 11:31:51.171189] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1679610 is same with the state(5) to be set 00:23:55.710 [2024-07-26 11:31:51.171196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.710 [2024-07-26 11:31:51.171202] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1679610 is same with the state(5) to be set 00:23:55.710 [2024-07-26 11:31:51.171211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.710 [2024-07-26 11:31:51.171228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.710 [2024-07-26 11:31:51.171242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.710 [2024-07-26 11:31:51.171262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.710 [2024-07-26 11:31:51.171277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.710 [2024-07-26 11:31:51.171294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.710 [2024-07-26 11:31:51.171308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.710 [2024-07-26 11:31:51.171324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.710 [2024-07-26 11:31:51.171338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.710 [2024-07-26 11:31:51.171354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.710 [2024-07-26 11:31:51.171368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.710 [2024-07-26 11:31:51.171468] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d1c470 was disconnected and freed. reset controller. 
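The status printed in parentheses on every completion notice above is the NVMe SCT/SC pair: "(00/08)" is status code type 0 (generic command status) with status code 0x08, which the NVMe base specification defines as "command aborted due to SQ deletion". A minimal, self-contained C sketch of that decoding follows; it is not SPDK's printer, just the table lookup those lines imply.

/* Sketch (not SPDK source): decode the "(SCT/SC)" pair printed in the
 * completion notices above, e.g. "(00/08)". SCT is the NVMe Status Code
 * Type; SC is the Status Code within that type. Names follow the NVMe
 * base specification's Generic Command Status table. */
#include <stdio.h>

static const char *generic_sc_name(unsigned int sc)
{
	switch (sc) {
	case 0x00: return "SUCCESSFUL COMPLETION";
	case 0x07: return "COMMAND ABORT REQUESTED";
	case 0x08: return "COMMAND ABORTED DUE TO SQ DELETION";
	case 0x0b: return "INVALID NAMESPACE OR FORMAT";
	default:   return "OTHER GENERIC STATUS";
	}
}

static void decode_status(unsigned int sct, unsigned int sc)
{
	if (sct == 0x0) {
		printf("SCT %02x / SC %02x: %s\n", sct, sc, generic_sc_name(sc));
	} else {
		printf("SCT %02x / SC %02x: non-generic status code type\n", sct, sc);
	}
}

int main(void)
{
	/* The pair seen throughout this log: every in-flight READ was
	 * completed with "aborted due to SQ deletion" when its submission
	 * queue went away during the controller reset. */
	decode_status(0x00, 0x08);
	return 0;
}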
[2024-07-26 11:31:51.172822] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1679ad0 is same with the state(5) to be set 00:23:55.710
    (message repeated 63 times for tqpair=0x1679ad0, 11:31:51.172822 through 11:31:51.173722)
[2024-07-26 11:31:51.174896] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14435a0 is same with the state(5) to be set 00:23:55.711
    (message repeated 63 times for tqpair=0x14435a0, 11:31:51.174896 through 11:31:51.175813)
[2024-07-26 11:31:51.176612] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1443a60 is same with the state(5) to be set 00:23:55.711
    (message repeated 63 times for tqpair=0x1443a60, 11:31:51.176612 through 11:31:51.177480)
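All three runs above come from the same guard: a qpair receive-state setter invoked with the state the qpair already holds, which logs the fact and returns instead of redoing state-entry work. A minimal C sketch of that pattern follows; the enum names and the meaning of state 5 are assumptions for illustration, not SPDK's actual definitions.

/* Sketch only (hypothetical enum, not SPDK's): the pattern behind the
 * repeated "recv state ... is same with the state(N) to be set" lines.
 * The setter is called redundantly while the qpair is torn down, so it
 * logs and returns rather than re-running state-entry work. */
#include <stdio.h>

enum recv_state {                 /* invented names for illustration */
	RECV_STATE_AWAIT_PDU = 0,
	RECV_STATE_AWAIT_PAYLOAD,
	RECV_STATE_ERROR = 5,     /* state(5) is assumed here to be a terminal/error state */
};

struct qpair {
	enum recv_state recv_state;
};

static void qpair_set_recv_state(struct qpair *q, enum recv_state state)
{
	if (q->recv_state == state) {
		/* Matches the repeated *ERROR* lines above: noisy but harmless. */
		fprintf(stderr,
			"The recv state of tqpair=%p is same with the state(%d) to be set\n",
			(void *)q, (int)state);
		return;
	}
	q->recv_state = state;
	/* ...state-entry bookkeeping would go here... */
}

int main(void)
{
	struct qpair q = { .recv_state = RECV_STATE_AWAIT_PDU };

	qpair_set_recv_state(&q, RECV_STATE_ERROR);  /* transitions, silent */
	qpair_set_recv_state(&q, RECV_STATE_ERROR);  /* redundant, logs once */
	return 0;
}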
[2024-07-26 11:31:51.194991] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:23:55.712
[2024-07-26 11:31:51.195132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:55.712
[2024-07-26 11:31:51.195202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d52420 (9): Bad file descriptor 00:23:55.712
[2024-07-26 11:31:51.195235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d4f120 (9): Bad file descriptor 00:23:55.712
[2024-07-26 11:31:51.195291-11:31:51.195632] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: four outstanding ASYNC EVENT REQUEST commands (0c, qid:0 cid:0-3, nsid:0 cdw10:00000000 cdw11:00000000) aborted with SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 on each of tqpair=0x1ec6b30 and tqpair=0x1d4eba0; each group ends with nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair is same with the state(5) to be set 00:23:55.713
[2024-07-26 11:31:51.195666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1910200 (9): Bad file descriptor 00:23:55.713
[2024-07-26 11:31:51.195733-11:31:51.196624] nvme_qpair.c: 223/474: *NOTICE*: the same four-AER abort pattern repeats for tqpair=0x1e85fc0, 0x1eee950, 0x1824610, 0x1ede6e0 and 0x1ede490, each likewise ending with the nvme_tcp.c: 327 recv-state message 00:23:55.713
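Each admin queue above shows the same teardown step: when its submission queue is deleted for the reset, every still-outstanding ASYNC EVENT REQUEST (opcode 0c) is completed locally with the abort status seen throughout this log. A generic C sketch of that step follows, with hypothetical request and list types rather than SPDK's internals.

/* Hypothetical sketch of the teardown step reflected above: when a queue
 * pair is disconnected, every request still outstanding on it is completed
 * locally with SCT 0x0 / SC 0x08 ("command aborted due to SQ deletion"). */
#include <stdio.h>
#include <stddef.h>

#define SC_ABORTED_SQ_DELETION 0x08u

struct request {
	unsigned int cid;               /* command identifier */
	unsigned int opc;               /* opcode; 0x0c = Async Event Request */
	struct request *next;
	void (*cb)(struct request *r, unsigned int sct, unsigned int sc);
};

static void aborted_cb(struct request *r, unsigned int sct, unsigned int sc)
{
	printf("cid:%u opc:%02x completed with (%02x/%02x)\n", r->cid, r->opc, sct, sc);
}

/* Complete every queued request with the abort status and empty the list. */
static void abort_outstanding(struct request **head)
{
	while (*head != NULL) {
		struct request *r = *head;
		*head = r->next;
		r->cb(r, 0x0, SC_ABORTED_SQ_DELETION);
	}
}

int main(void)
{
	struct request reqs[4];
	struct request *head = NULL;

	/* Queue four AERs, cid 0..3, mirroring the admin qpairs in the log. */
	for (int i = 3; i >= 0; i--) {
		reqs[i] = (struct request){ .cid = (unsigned int)i, .opc = 0x0c,
					    .next = head, .cb = aborted_cb };
		head = &reqs[i];
	}
	abort_outstanding(&head);
	return 0;
}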
[2024-07-26 11:31:51.196753-11:31:51.198039] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:19-57 nsid:1 lba:18816-23680 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.714
[2024-07-26 11:31:51.198056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.714 [2024-07-26 11:31:51.198070] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.714 [2024-07-26 11:31:51.198089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.714 [2024-07-26 11:31:51.198103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.714 [2024-07-26 11:31:51.198121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.714 [2024-07-26 11:31:51.198135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.714 [2024-07-26 11:31:51.198152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.714 [2024-07-26 11:31:51.198166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.714 [2024-07-26 11:31:51.198183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.714 [2024-07-26 11:31:51.198198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.714 [2024-07-26 11:31:51.198214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.714 [2024-07-26 11:31:51.198229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.714 [2024-07-26 11:31:51.198245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.714 [2024-07-26 11:31:51.198260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.714 [2024-07-26 11:31:51.198277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.714 [2024-07-26 11:31:51.198292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.714 [2024-07-26 11:31:51.198309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.714 [2024-07-26 11:31:51.198324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.714 [2024-07-26 11:31:51.198340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.715 [2024-07-26 11:31:51.198355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.715 [2024-07-26 11:31:51.198371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.715 [2024-07-26 11:31:51.198386] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.715 [2024-07-26 11:31:51.198403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.715 [2024-07-26 11:31:51.198438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.715 [2024-07-26 11:31:51.198458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.715 [2024-07-26 11:31:51.198473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.715 [2024-07-26 11:31:51.198490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.715 [2024-07-26 11:31:51.198505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.715 [2024-07-26 11:31:51.198522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.715 [2024-07-26 11:31:51.198537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.715 [2024-07-26 11:31:51.198556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.715 [2024-07-26 11:31:51.198571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.715 [2024-07-26 11:31:51.198588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.715 [2024-07-26 11:31:51.198603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.715 [2024-07-26 11:31:51.198620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.715 [2024-07-26 11:31:51.198634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.715 [2024-07-26 11:31:51.198651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.715 [2024-07-26 11:31:51.198666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.715 [2024-07-26 11:31:51.198683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.715 [2024-07-26 11:31:51.198697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.715 [2024-07-26 11:31:51.198723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.715 [2024-07-26 11:31:51.198737] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.715 [2024-07-26 11:31:51.198754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.715 [2024-07-26 11:31:51.198769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.715 [2024-07-26 11:31:51.198787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.715 [2024-07-26 11:31:51.198802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.715 [2024-07-26 11:31:51.198819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.715 [2024-07-26 11:31:51.198835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.715 [2024-07-26 11:31:51.198856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.715 [2024-07-26 11:31:51.198872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.715 [2024-07-26 11:31:51.198993] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1df97b0 was disconnected and freed. reset controller. 00:23:55.715 [2024-07-26 11:31:51.199094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.715 [2024-07-26 11:31:51.199116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.715 [2024-07-26 11:31:51.199138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.715 [2024-07-26 11:31:51.199154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.715 [2024-07-26 11:31:51.199172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.715 [2024-07-26 11:31:51.199186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.715 [2024-07-26 11:31:51.199204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.715 [2024-07-26 11:31:51.199219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.715 [2024-07-26 11:31:51.199237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.715 [2024-07-26 11:31:51.199252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.715 [2024-07-26 11:31:51.199270] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.715 [2024-07-26 11:31:51.199284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.715 [2024-07-26 11:31:51.199301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.715 [2024-07-26 11:31:51.199316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.715 [2024-07-26 11:31:51.199333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.715 [2024-07-26 11:31:51.199348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.715 [2024-07-26 11:31:51.199365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.715 [2024-07-26 11:31:51.199379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.715 [2024-07-26 11:31:51.199396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.715 [2024-07-26 11:31:51.199410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.715 [2024-07-26 11:31:51.199435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.715 [2024-07-26 11:31:51.199452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.715 [2024-07-26 11:31:51.199476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.715 [2024-07-26 11:31:51.199492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.715 [2024-07-26 11:31:51.199509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.715 [2024-07-26 11:31:51.199525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.715 [2024-07-26 11:31:51.199542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.715 [2024-07-26 11:31:51.199557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.715 [2024-07-26 11:31:51.199575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.715 [2024-07-26 11:31:51.199590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.715 [2024-07-26 11:31:51.199607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.715 [2024-07-26 11:31:51.199621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.715 [2024-07-26 11:31:51.199638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.715 [2024-07-26 11:31:51.199653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.715 [2024-07-26 11:31:51.199670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.716 [2024-07-26 11:31:51.199685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.716 [2024-07-26 11:31:51.199702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.716 [2024-07-26 11:31:51.199725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.716 [2024-07-26 11:31:51.199743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.716 [2024-07-26 11:31:51.199758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.716 [2024-07-26 11:31:51.199775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.716 [2024-07-26 11:31:51.199791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.716 [2024-07-26 11:31:51.199808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.716 [2024-07-26 11:31:51.199823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.716 [2024-07-26 11:31:51.199840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.716 [2024-07-26 11:31:51.199855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.716 [2024-07-26 11:31:51.199871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.716 [2024-07-26 11:31:51.199890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.716 [2024-07-26 11:31:51.199907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.716 [2024-07-26 11:31:51.199922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.716 [2024-07-26 11:31:51.199939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 
nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.716 [2024-07-26 11:31:51.199954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.716 [2024-07-26 11:31:51.199970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.716 [2024-07-26 11:31:51.199985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.716 [2024-07-26 11:31:51.200001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.716 [2024-07-26 11:31:51.200016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.716 [2024-07-26 11:31:51.200032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.716 [2024-07-26 11:31:51.200046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.716 [2024-07-26 11:31:51.200063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.716 [2024-07-26 11:31:51.200078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.716 [2024-07-26 11:31:51.200095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.716 [2024-07-26 11:31:51.200109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.716 [2024-07-26 11:31:51.200125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.716 [2024-07-26 11:31:51.200139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.716 [2024-07-26 11:31:51.200156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.716 [2024-07-26 11:31:51.200171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.716 [2024-07-26 11:31:51.200188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.716 [2024-07-26 11:31:51.200202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.716 [2024-07-26 11:31:51.200219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.716 [2024-07-26 11:31:51.200234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.716 [2024-07-26 11:31:51.200251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 
lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.716 [2024-07-26 11:31:51.200265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.716 [2024-07-26 11:31:51.200286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.716 [2024-07-26 11:31:51.200301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.716 [2024-07-26 11:31:51.200318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.716 [2024-07-26 11:31:51.200333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.716 [2024-07-26 11:31:51.200350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.716 [2024-07-26 11:31:51.200364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.716 [2024-07-26 11:31:51.200381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.716 [2024-07-26 11:31:51.200395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.716 [2024-07-26 11:31:51.200412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.716 [2024-07-26 11:31:51.200434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.716 [2024-07-26 11:31:51.200453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.716 [2024-07-26 11:31:51.200469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.716 [2024-07-26 11:31:51.200486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.716 [2024-07-26 11:31:51.200500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.716 [2024-07-26 11:31:51.200517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.716 [2024-07-26 11:31:51.200532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.716 [2024-07-26 11:31:51.200548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.716 [2024-07-26 11:31:51.200563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.716 [2024-07-26 11:31:51.200580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.716 [2024-07-26 11:31:51.200595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.716 [2024-07-26 11:31:51.200611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.716 [2024-07-26 11:31:51.200626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.716 [2024-07-26 11:31:51.200642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.716 [2024-07-26 11:31:51.200656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.716 [2024-07-26 11:31:51.200673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.716 [2024-07-26 11:31:51.200693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.716 [2024-07-26 11:31:51.200712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.716 [2024-07-26 11:31:51.200726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.716 [2024-07-26 11:31:51.200743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.716 [2024-07-26 11:31:51.200758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.716 [2024-07-26 11:31:51.200774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.716 [2024-07-26 11:31:51.200789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.717 [2024-07-26 11:31:51.200805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.717 [2024-07-26 11:31:51.200820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.717 [2024-07-26 11:31:51.200837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.717 [2024-07-26 11:31:51.200851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.717 [2024-07-26 11:31:51.200868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.717 [2024-07-26 11:31:51.200882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.717 [2024-07-26 11:31:51.200899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.717 [2024-07-26 11:31:51.200913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.717 [2024-07-26 11:31:51.200930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.717 [2024-07-26 11:31:51.200944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.717 [2024-07-26 11:31:51.200961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.717 [2024-07-26 11:31:51.200976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.717 [2024-07-26 11:31:51.200993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.717 [2024-07-26 11:31:51.201007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.717 [2024-07-26 11:31:51.201024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.717 [2024-07-26 11:31:51.201038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.717 [2024-07-26 11:31:51.201055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.717 [2024-07-26 11:31:51.201069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.717 [2024-07-26 11:31:51.201090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.717 [2024-07-26 11:31:51.201105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.717 [2024-07-26 11:31:51.201122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.717 [2024-07-26 11:31:51.201137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.717 [2024-07-26 11:31:51.201153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.717 [2024-07-26 11:31:51.201168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.717 [2024-07-26 11:31:51.201260] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e781c0 was disconnected and freed. reset controller. 
00:23:55.717 [2024-07-26 11:31:51.205454] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:55.717 [2024-07-26 11:31:51.205507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:55.717 [2024-07-26 11:31:51.205534] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:23:55.717 [2024-07-26 11:31:51.205562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eee950 (9): Bad file descriptor
00:23:55.717 [2024-07-26 11:31:51.205777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.717 [2024-07-26 11:31:51.205828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d4f120 with addr=10.0.0.2, port=4420
00:23:55.717 [2024-07-26 11:31:51.205847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4f120 is same with the state(5) to be set
00:23:55.717 [2024-07-26 11:31:51.206154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.717 [2024-07-26 11:31:51.206202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d52420 with addr=10.0.0.2, port=4420
00:23:55.717 [2024-07-26 11:31:51.206219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d52420 is same with the state(5) to be set
00:23:55.717 [2024-07-26 11:31:51.206244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec6b30 (9): Bad file descriptor
00:23:55.717 [2024-07-26 11:31:51.206279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d4eba0 (9): Bad file descriptor
00:23:55.717 [2024-07-26 11:31:51.206313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e85fc0 (9): Bad file descriptor
00:23:55.717 [2024-07-26 11:31:51.206357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1824610 (9): Bad file descriptor
00:23:55.717 [2024-07-26 11:31:51.206401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ede6e0 (9): Bad file descriptor
00:23:55.717 [2024-07-26 11:31:51.206442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ede490 (9): Bad file descriptor
00:23:55.717 [2024-07-26 11:31:51.206598] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:55.717 [2024-07-26 11:31:51.207107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.717 [2024-07-26 11:31:51.207157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1910200 with addr=10.0.0.2, port=4420
00:23:55.717 [2024-07-26 11:31:51.207175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1910200 is same with the state(5) to be set
00:23:55.717 [2024-07-26 11:31:51.207207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d4f120 (9): Bad file descriptor
00:23:55.717 [2024-07-26 11:31:51.207240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d52420 (9): Bad file descriptor
00:23:55.717 [2024-07-26 11:31:51.207998] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:55.717 [2024-07-26 11:31:51.208083] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:55.717 [2024-07-26 11:31:51.208163] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:55.717 [2024-07-26 11:31:51.208249] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:55.717 [2024-07-26 11:31:51.208437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.717 [2024-07-26 11:31:51.208468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eee950 with addr=10.0.0.2, port=4420
00:23:55.717 [2024-07-26 11:31:51.208485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eee950 is same with the state(5) to be set
00:23:55.717 [2024-07-26 11:31:51.208506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1910200 (9): Bad file descriptor
00:23:55.717 [2024-07-26 11:31:51.208525] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:23:55.717 [2024-07-26 11:31:51.208539] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:23:55.717 [2024-07-26 11:31:51.208554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:23:55.717 [2024-07-26 11:31:51.208578] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:23:55.717 [2024-07-26 11:31:51.208593] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:23:55.717 [2024-07-26 11:31:51.208607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:23:55.717 [2024-07-26 11:31:51.208731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:55.717 [2024-07-26 11:31:51.208755] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:55.717 [2024-07-26 11:31:51.208772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eee950 (9): Bad file descriptor
00:23:55.717 [2024-07-26 11:31:51.208790] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:55.717 [2024-07-26 11:31:51.208803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:55.718 [2024-07-26 11:31:51.208818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:55.718 [2024-07-26 11:31:51.208901] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:55.718 [2024-07-26 11:31:51.208922] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:23:55.718 [2024-07-26 11:31:51.208935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:23:55.718 [2024-07-26 11:31:51.208949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:23:55.718 [2024-07-26 11:31:51.209006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:55.718 [2024-07-26 11:31:51.212051] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:23:55.718 [2024-07-26 11:31:51.212085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:23:55.718 [2024-07-26 11:31:51.212381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.718 [2024-07-26 11:31:51.212438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d52420 with addr=10.0.0.2, port=4420
00:23:55.718 [2024-07-26 11:31:51.212458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d52420 is same with the state(5) to be set
00:23:55.718 [2024-07-26 11:31:51.212638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.718 [2024-07-26 11:31:51.212668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d4f120 with addr=10.0.0.2, port=4420
00:23:55.718 [2024-07-26 11:31:51.212685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4f120 is same with the state(5) to be set
00:23:55.718 [2024-07-26 11:31:51.212744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d52420 (9): Bad file descriptor
00:23:55.718 [2024-07-26 11:31:51.212768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d4f120 (9): Bad file descriptor
00:23:55.718 [2024-07-26 11:31:51.212824] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:23:55.718 [2024-07-26 11:31:51.212843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:23:55.718 [2024-07-26 11:31:51.212861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:23:55.718 [2024-07-26 11:31:51.212881] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:23:55.718 [2024-07-26 11:31:51.212895] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:23:55.718 [2024-07-26 11:31:51.212909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:23:55.718 [2024-07-26 11:31:51.212967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:55.718 [2024-07-26 11:31:51.212985] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:55.718 [2024-07-26 11:31:51.215745 through 11:31:51.217372] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [repeated records elided: READ sqid:1 cid:0-49 (lba:16384-22656, step 128), each len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, all completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:23:55.719 [2024-07-26 11:31:51.217389] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.719 [2024-07-26 11:31:51.217404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.719 [2024-07-26 11:31:51.217421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.719 [2024-07-26 11:31:51.217444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.719 [2024-07-26 11:31:51.217462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.719 [2024-07-26 11:31:51.217477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.719 [2024-07-26 11:31:51.217495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.719 [2024-07-26 11:31:51.217510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.719 [2024-07-26 11:31:51.217526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.719 [2024-07-26 11:31:51.217541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.719 [2024-07-26 11:31:51.217558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.719 [2024-07-26 11:31:51.217572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.719 [2024-07-26 11:31:51.217589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.719 [2024-07-26 11:31:51.217603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.719 [2024-07-26 11:31:51.217620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.719 [2024-07-26 11:31:51.217639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.719 [2024-07-26 11:31:51.217656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.719 [2024-07-26 11:31:51.217671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.719 [2024-07-26 11:31:51.217687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.719 [2024-07-26 11:31:51.217702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.720 [2024-07-26 11:31:51.217718] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.720 [2024-07-26 11:31:51.217733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.720 [2024-07-26 11:31:51.217749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.720 [2024-07-26 11:31:51.217764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.720 [2024-07-26 11:31:51.217780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.720 [2024-07-26 11:31:51.217795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.720 [2024-07-26 11:31:51.217811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.720 [2024-07-26 11:31:51.217826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.720 [2024-07-26 11:31:51.217841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1d920 is same with the state(5) to be set 00:23:55.720 [2024-07-26 11:31:51.219261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.720 [2024-07-26 11:31:51.219287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.720 [2024-07-26 11:31:51.219311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.720 [2024-07-26 11:31:51.219327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.720 [2024-07-26 11:31:51.219344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.720 [2024-07-26 11:31:51.219360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.720 [2024-07-26 11:31:51.219377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.720 [2024-07-26 11:31:51.219392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.720 [2024-07-26 11:31:51.219408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.720 [2024-07-26 11:31:51.219423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.720 [2024-07-26 11:31:51.219448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.720 [2024-07-26 11:31:51.219469] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.720 [2024-07-26 11:31:51.219487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.720 [2024-07-26 11:31:51.219502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.720 [2024-07-26 11:31:51.219519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.720 [2024-07-26 11:31:51.219533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.720 [2024-07-26 11:31:51.219549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.720 [2024-07-26 11:31:51.219564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.720 [2024-07-26 11:31:51.219580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.720 [2024-07-26 11:31:51.219595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.720 [2024-07-26 11:31:51.219612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.720 [2024-07-26 11:31:51.219626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.720 [2024-07-26 11:31:51.219643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.720 [2024-07-26 11:31:51.219657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.720 [2024-07-26 11:31:51.219673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.720 [2024-07-26 11:31:51.219687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.720 [2024-07-26 11:31:51.219704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.720 [2024-07-26 11:31:51.219718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.720 [2024-07-26 11:31:51.219735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.720 [2024-07-26 11:31:51.219750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.720 [2024-07-26 11:31:51.219766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.720 [2024-07-26 11:31:51.219782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.720 [2024-07-26 11:31:51.219799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.720 [2024-07-26 11:31:51.219814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.720 [2024-07-26 11:31:51.219832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.720 [2024-07-26 11:31:51.219847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.720 [2024-07-26 11:31:51.219868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.720 [2024-07-26 11:31:51.219883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.720 [2024-07-26 11:31:51.219900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.720 [2024-07-26 11:31:51.219914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.720 [2024-07-26 11:31:51.219931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.720 [2024-07-26 11:31:51.219945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.720 [2024-07-26 11:31:51.219962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.720 [2024-07-26 11:31:51.219976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.720 [2024-07-26 11:31:51.219993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.720 [2024-07-26 11:31:51.220008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.720 [2024-07-26 11:31:51.220024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.720 [2024-07-26 11:31:51.220038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.720 [2024-07-26 11:31:51.220055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.720 [2024-07-26 11:31:51.220069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.720 [2024-07-26 11:31:51.220086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.720 [2024-07-26 11:31:51.220100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.720 [2024-07-26 11:31:51.220117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.720 [2024-07-26 11:31:51.220132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.720 [2024-07-26 11:31:51.220148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.721 [2024-07-26 11:31:51.220163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.721 [2024-07-26 11:31:51.220179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.721 [2024-07-26 11:31:51.220194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.721 [2024-07-26 11:31:51.220210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.721 [2024-07-26 11:31:51.220225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.721 [2024-07-26 11:31:51.220241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.721 [2024-07-26 11:31:51.220260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.721 [2024-07-26 11:31:51.220278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.721 [2024-07-26 11:31:51.220293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.721 [2024-07-26 11:31:51.220309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.721 [2024-07-26 11:31:51.220324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.721 [2024-07-26 11:31:51.220340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.721 [2024-07-26 11:31:51.220355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.721 [2024-07-26 11:31:51.220372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.721 [2024-07-26 11:31:51.220386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.721 [2024-07-26 11:31:51.220403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.721 [2024-07-26 11:31:51.220417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:55.721 [2024-07-26 11:31:51.220443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.721 [2024-07-26 11:31:51.220460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.721 [2024-07-26 11:31:51.220477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.721 [2024-07-26 11:31:51.220491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.721 [2024-07-26 11:31:51.220508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.721 [2024-07-26 11:31:51.220522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.721 [2024-07-26 11:31:51.220539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.721 [2024-07-26 11:31:51.220553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.721 [2024-07-26 11:31:51.220570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.721 [2024-07-26 11:31:51.220584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.721 [2024-07-26 11:31:51.220601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.721 [2024-07-26 11:31:51.220615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.721 [2024-07-26 11:31:51.220632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.721 [2024-07-26 11:31:51.220646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.721 [2024-07-26 11:31:51.220668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.721 [2024-07-26 11:31:51.220683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.721 [2024-07-26 11:31:51.220699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.721 [2024-07-26 11:31:51.220714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.721 [2024-07-26 11:31:51.220731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.721 [2024-07-26 11:31:51.220746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:55.721 [2024-07-26 11:31:51.220763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.721 [2024-07-26 11:31:51.220778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.721 [2024-07-26 11:31:51.220794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.721 [2024-07-26 11:31:51.220809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.721 [2024-07-26 11:31:51.220826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.721 [2024-07-26 11:31:51.220840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.721 [2024-07-26 11:31:51.220857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.721 [2024-07-26 11:31:51.220872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.721 [2024-07-26 11:31:51.220888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.721 [2024-07-26 11:31:51.220903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.721 [2024-07-26 11:31:51.220919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.721 [2024-07-26 11:31:51.220934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.721 [2024-07-26 11:31:51.220950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.721 [2024-07-26 11:31:51.220965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.721 [2024-07-26 11:31:51.220981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.721 [2024-07-26 11:31:51.220996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.721 [2024-07-26 11:31:51.221012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.721 [2024-07-26 11:31:51.221027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.721 [2024-07-26 11:31:51.221043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.721 [2024-07-26 11:31:51.221062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.721 [2024-07-26 
11:31:51.221080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.721 [2024-07-26 11:31:51.221094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.721 [2024-07-26 11:31:51.221111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.721 [2024-07-26 11:31:51.221126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.721 [2024-07-26 11:31:51.221142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.721 [2024-07-26 11:31:51.221157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.721 [2024-07-26 11:31:51.221173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.721 [2024-07-26 11:31:51.221188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.721 [2024-07-26 11:31:51.221204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.721 [2024-07-26 11:31:51.221218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.721 [2024-07-26 11:31:51.221235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.721 [2024-07-26 11:31:51.221249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.721 [2024-07-26 11:31:51.221266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.721 [2024-07-26 11:31:51.221280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.721 [2024-07-26 11:31:51.221297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.721 [2024-07-26 11:31:51.221312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.721 [2024-07-26 11:31:51.221328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1ee10 is same with the state(5) to be set 00:23:55.721 [2024-07-26 11:31:51.222727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.721 [2024-07-26 11:31:51.222753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.721 [2024-07-26 11:31:51.222776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.721 [2024-07-26 11:31:51.222792] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.721 [2024-07-26 11:31:51.222809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.722 [2024-07-26 11:31:51.222823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.722 [2024-07-26 11:31:51.222840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.722 [2024-07-26 11:31:51.222860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.722 [2024-07-26 11:31:51.222878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.722 [2024-07-26 11:31:51.222893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.722 [2024-07-26 11:31:51.222910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.722 [2024-07-26 11:31:51.222924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.722 [2024-07-26 11:31:51.222941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.722 [2024-07-26 11:31:51.222956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.722 [2024-07-26 11:31:51.222973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.722 [2024-07-26 11:31:51.222987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.722 [2024-07-26 11:31:51.223003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.722 [2024-07-26 11:31:51.223018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.722 [2024-07-26 11:31:51.223035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.722 [2024-07-26 11:31:51.223049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.722 [2024-07-26 11:31:51.223066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.722 [2024-07-26 11:31:51.223080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.722 [2024-07-26 11:31:51.223097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.722 [2024-07-26 11:31:51.223112] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.722 [2024-07-26 11:31:51.223128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.722 [2024-07-26 11:31:51.223143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.722 [2024-07-26 11:31:51.223160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.722 [2024-07-26 11:31:51.223174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.722 [2024-07-26 11:31:51.223191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.722 [2024-07-26 11:31:51.223206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.722 [2024-07-26 11:31:51.223223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.722 [2024-07-26 11:31:51.223238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.722 [2024-07-26 11:31:51.223255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.722 [2024-07-26 11:31:51.223274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.722 [2024-07-26 11:31:51.223291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.722 [2024-07-26 11:31:51.223306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.722 [2024-07-26 11:31:51.223323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.722 [2024-07-26 11:31:51.223337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.722 [2024-07-26 11:31:51.223354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.722 [2024-07-26 11:31:51.223369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.722 [2024-07-26 11:31:51.223386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.722 [2024-07-26 11:31:51.223400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.722 [2024-07-26 11:31:51.223425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.722 [2024-07-26 11:31:51.223448] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.722 [2024-07-26 11:31:51.223465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.722 [2024-07-26 11:31:51.223480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.722 [2024-07-26 11:31:51.223497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.722 [2024-07-26 11:31:51.223512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.722 [2024-07-26 11:31:51.223528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.722 [2024-07-26 11:31:51.223543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.722 [2024-07-26 11:31:51.223560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.722 [2024-07-26 11:31:51.223574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.722 [2024-07-26 11:31:51.223591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.722 [2024-07-26 11:31:51.223605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.722 [2024-07-26 11:31:51.223622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.722 [2024-07-26 11:31:51.223636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.722 [2024-07-26 11:31:51.223653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.723 [2024-07-26 11:31:51.223667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.723 [2024-07-26 11:31:51.223688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.723 [2024-07-26 11:31:51.223703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.723 [2024-07-26 11:31:51.223730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.723 [2024-07-26 11:31:51.223745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.723 [2024-07-26 11:31:51.223763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.723 [2024-07-26 11:31:51.223777] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.723 [2024-07-26 11:31:51.223794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.723 [2024-07-26 11:31:51.223809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.723 [2024-07-26 11:31:51.223825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.723 [2024-07-26 11:31:51.223840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.723 [2024-07-26 11:31:51.223856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.723 [2024-07-26 11:31:51.223871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.723 [2024-07-26 11:31:51.223887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.723 [2024-07-26 11:31:51.223902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.723 [2024-07-26 11:31:51.223918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.723 [2024-07-26 11:31:51.223933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.723 [2024-07-26 11:31:51.223949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.723 [2024-07-26 11:31:51.223963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.723 [2024-07-26 11:31:51.223979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.723 [2024-07-26 11:31:51.223993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.723 [2024-07-26 11:31:51.224010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.723 [2024-07-26 11:31:51.224024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.723 [2024-07-26 11:31:51.224041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.723 [2024-07-26 11:31:51.224055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.723 [2024-07-26 11:31:51.224071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.723 [2024-07-26 11:31:51.224090] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.723 [2024-07-26 11:31:51.224108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.723 [2024-07-26 11:31:51.224122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.723 [2024-07-26 11:31:51.224139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.723 [2024-07-26 11:31:51.224153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.723 [2024-07-26 11:31:51.224169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.723 [2024-07-26 11:31:51.224184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.723 [2024-07-26 11:31:51.224200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.723 [2024-07-26 11:31:51.224215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.723 [2024-07-26 11:31:51.224231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.723 [2024-07-26 11:31:51.224245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.723 [2024-07-26 11:31:51.224263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.723 [2024-07-26 11:31:51.224277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.723 [2024-07-26 11:31:51.224294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.723 [2024-07-26 11:31:51.224309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.723 [2024-07-26 11:31:51.224325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.723 [2024-07-26 11:31:51.224339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.723 [2024-07-26 11:31:51.224356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.723 [2024-07-26 11:31:51.224370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.723 [2024-07-26 11:31:51.224388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.723 [2024-07-26 11:31:51.224402] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:55.723 [2024-07-26 11:31:51.224418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.723 [2024-07-26 11:31:51.224443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / ABORTED - SQ DELETION completion pairs repeat for cid:53-63, lba:23168-24448, len:128 ...]
00:23:55.723 [2024-07-26 11:31:51.224808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2650390 is same with the state(5) to be set
00:23:55.724 [2024-07-26 11:31:51.226184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.724 [2024-07-26 11:31:51.226209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / ABORTED - SQ DELETION completion pairs repeat for cid:1-63, lba:16512-24448, len:128 ...]
00:23:55.725 [2024-07-26 11:31:51.228279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27f7dd0 is same with the state(5) to be set
00:23:55.725 [2024-07-26 11:31:51.229675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.725 [2024-07-26 11:31:51.229700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / ABORTED - SQ DELETION completion pairs repeat for cid:1-63, lba:16512-24448, len:128 ...]
00:23:55.727 [2024-07-26 11:31:51.231769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1820 is same with the state(5) to be set
00:23:55.727 [2024-07-26 11:31:51.234357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.727 [2024-07-26 11:31:51.234385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / ABORTED - SQ DELETION completion pairs repeat for cid:1-58, lba:16512-23808, len:128 ...]
00:23:55.729 [2024-07-26 
11:31:51.236288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.729 [2024-07-26 11:31:51.236302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.729 [2024-07-26 11:31:51.236319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.729 [2024-07-26 11:31:51.236333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.729 [2024-07-26 11:31:51.236354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.729 [2024-07-26 11:31:51.236370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.729 [2024-07-26 11:31:51.236387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.729 [2024-07-26 11:31:51.236401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.729 [2024-07-26 11:31:51.236418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.729 [2024-07-26 11:31:51.236439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.729 [2024-07-26 11:31:51.236456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2cf0 is same with the state(5) to be set 00:23:55.729 [2024-07-26 11:31:51.238197] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:23:55.729 [2024-07-26 11:31:51.238231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:23:55.729 [2024-07-26 11:31:51.238252] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:23:55.729 [2024-07-26 11:31:51.238368] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:55.729 [2024-07-26 11:31:51.238399] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:55.729 [2024-07-26 11:31:51.238421] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:55.729 [2024-07-26 11:31:51.238449] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
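The run of identical completions above is the expected signature of target-side shutdown, not a transport fault: the target deletes its submission queues while the verify READs are still in flight, so every outstanding command completes with the NVMe generic status "Command Aborted due to SQ Deletion" (status code type 00h, status code 08h). A minimal sketch for pulling the same summary out of a saved run; the build.log file name is an assumption, and grep -o is used so the count is right even if records wrap across lines:

  # count completions aborted by SQ deletion
  grep -o 'ABORTED - SQ DELETION' build.log | wc -l
  # show the lowest and highest LBA the aborted READs covered
  grep -o 'lba:[0-9]*' build.log | sort -t: -k2,2n | sed -n '1p;$p'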
00:23:55.729 [2024-07-26 11:31:51.238550] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:23:55.729 [2024-07-26 11:31:51.238577] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
task offset: 32128 on job bdev=Nvme3n1 fails

Latency(us)
All jobs ran with Core Mask 0x1, workload: verify, depth: 64, IO size: 65536, over Verification LBA range start 0x0 length 0x400, and every job ended in error; the per-job "ended in about N seconds" figure is the runtime(s) column.
Device Information :  runtime(s)  IOPS     MiB/s  Fail/s  TO/s  Average    min       max
Nvme1n1            :  0.99        148.67   9.29   64.73   0.00  296497.66  19806.44  292047.83
Nvme2n1            :  0.99        193.94   12.12  64.65   0.00  239618.37  11553.75  274959.93
Nvme3n1            :  0.98        196.15   12.26  65.38   0.00  231802.31  19709.35  274959.93
Nvme4n1            :  0.98        195.90   12.24  65.30   0.00  227062.14  20291.89  284280.60
Nvme5n1            :  1.00        127.41   7.96   63.70   0.00  304347.46  23301.69  299815.06
Nvme6n1            :  1.01        126.97   7.94   63.48   0.00  298983.47  22719.15  285834.05
Nvme7n1            :  1.01        126.53   7.91   63.27   0.00  293534.53  20486.07  285834.05
Nvme8n1            :  1.02        126.10   7.88   63.05   0.00  288148.04  24272.59  274959.93
Nvme9n1            :  1.02        125.67   7.85   62.84   0.00  282802.25  22913.33  259425.47
Nvme10n1           :  1.02        125.10   7.82   62.55   0.00  277837.50  21651.15  313796.08
===================================================================================================================
Total              :              1492.44  93.28  638.95  0.00  270548.11  11553.75  313796.08
00:23:55.729 [2024-07-26 11:31:51.266418] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:55.729 [2024-07-26 11:31:51.266506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:23:55.729 [2024-07-26 11:31:51.266542] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:55.729 [2024-07-26 11:31:51.266957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.729 [2024-07-26 11:31:51.267012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d4eba0 with addr=10.0.0.2, port=4420 00:23:55.729 [2024-07-26 11:31:51.267033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4eba0 is same with the state(5) to be set 00:23:55.729 [2024-07-26 11:31:51.267172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.729 [2024-07-26 11:31:51.267219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1824610 with addr=10.0.0.2, port=4420 00:23:55.729 [2024-07-26 11:31:51.267237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1824610 is same with the state(5) to be set 00:23:55.729 [2024-07-26 11:31:51.267444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.729 [2024-07-26 11:31:51.267473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec6b30 with addr=10.0.0.2, port=4420 00:23:55.729 [2024-07-26 11:31:51.267490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec6b30 is same with the state(5) to be set 00:23:55.729 [2024-07-26 11:31:51.269367] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:55.729 [2024-07-26 11:31:51.269398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:23:55.729 [2024-07-26 11:31:51.269694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.729 [2024-07-26 11:31:51.269725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ede6e0 with addr=10.0.0.2, port=4420 00:23:55.729 [2024-07-26 11:31:51.269743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede6e0 is same with the state(5) to be set 00:23:55.729 [2024-07-26 11:31:51.269910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.729 [2024-07-26 11:31:51.269975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ede490 with addr=10.0.0.2, port=4420 00:23:55.729 [2024-07-26 11:31:51.269993] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede490 is same with the state(5) to be set 00:23:55.729 [2024-07-26 11:31:51.270155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.729 [2024-07-26 11:31:51.270200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e85fc0 with addr=10.0.0.2, port=4420 00:23:55.729 [2024-07-26 11:31:51.270216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e85fc0 is same with the state(5) to be set 00:23:55.729 [2024-07-26 11:31:51.270443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.729
[2024-07-26 11:31:51.270472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1910200 with addr=10.0.0.2, port=4420 00:23:55.729 [2024-07-26 11:31:51.270488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1910200 is same with the state(5) to be set 00:23:55.729 [2024-07-26 11:31:51.270516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d4eba0 (9): Bad file descriptor 00:23:55.729 [2024-07-26 11:31:51.270539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1824610 (9): Bad file descriptor 00:23:55.729 [2024-07-26 11:31:51.270559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec6b30 (9): Bad file descriptor 00:23:55.729 [2024-07-26 11:31:51.270621] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:55.729 [2024-07-26 11:31:51.270661] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:55.729 [2024-07-26 11:31:51.270683] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:55.729 [2024-07-26 11:31:51.270705] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:55.729 [2024-07-26 11:31:51.270803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:55.729 [2024-07-26 11:31:51.271053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.729 [2024-07-26 11:31:51.271101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eee950 with addr=10.0.0.2, port=4420 00:23:55.729 [2024-07-26 11:31:51.271119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eee950 is same with the state(5) to be set 00:23:55.729 [2024-07-26 11:31:51.271318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.729 [2024-07-26 11:31:51.271372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d4f120 with addr=10.0.0.2, port=4420 00:23:55.730 [2024-07-26 11:31:51.271389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4f120 is same with the state(5) to be set 00:23:55.730 [2024-07-26 11:31:51.271409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ede6e0 (9): Bad file descriptor 00:23:55.730 [2024-07-26 11:31:51.271445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ede490 (9): Bad file descriptor 00:23:55.730 [2024-07-26 11:31:51.271467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e85fc0 (9): Bad file descriptor 00:23:55.730 [2024-07-26 11:31:51.271486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1910200 (9): Bad file descriptor 00:23:55.730 [2024-07-26 11:31:51.271507] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:23:55.730 [2024-07-26 11:31:51.271520] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:23:55.730 [2024-07-26 11:31:51.271536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:23:55.730 [2024-07-26 11:31:51.271557] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:23:55.730 [2024-07-26 11:31:51.271578] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:23:55.730 [2024-07-26 11:31:51.271593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:23:55.730 [2024-07-26 11:31:51.271611] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:23:55.730 [2024-07-26 11:31:51.271626] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:23:55.730 [2024-07-26 11:31:51.271640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:23:55.730 [2024-07-26 11:31:51.271746] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:55.730 [2024-07-26 11:31:51.271770] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:55.730 [2024-07-26 11:31:51.271783] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:55.730 [2024-07-26 11:31:51.272008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.730 [2024-07-26 11:31:51.272054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d52420 with addr=10.0.0.2, port=4420 00:23:55.730 [2024-07-26 11:31:51.272070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d52420 is same with the state(5) to be set 00:23:55.730 [2024-07-26 11:31:51.272090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eee950 (9): Bad file descriptor 00:23:55.730 [2024-07-26 11:31:51.272110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d4f120 (9): Bad file descriptor 00:23:55.730 [2024-07-26 11:31:51.272126] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:23:55.730 [2024-07-26 11:31:51.272139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:23:55.730 [2024-07-26 11:31:51.272153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:23:55.730 [2024-07-26 11:31:51.272172] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:23:55.730 [2024-07-26 11:31:51.272187] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:23:55.730 [2024-07-26 11:31:51.272200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:23:55.730 [2024-07-26 11:31:51.272220] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:23:55.730 [2024-07-26 11:31:51.272234] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:23:55.730 [2024-07-26 11:31:51.272247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:23:55.730 [2024-07-26 11:31:51.272274] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:55.730 [2024-07-26 11:31:51.272288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:55.730 [2024-07-26 11:31:51.272302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:55.730 [2024-07-26 11:31:51.272346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:55.730 [2024-07-26 11:31:51.272365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:55.730 [2024-07-26 11:31:51.272377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:55.730 [2024-07-26 11:31:51.272389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:55.730 [2024-07-26 11:31:51.272406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d52420 (9): Bad file descriptor 00:23:55.730 [2024-07-26 11:31:51.272444] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:55.730 [2024-07-26 11:31:51.272461] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:23:55.730 [2024-07-26 11:31:51.272474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:55.730 [2024-07-26 11:31:51.272491] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:23:55.730 [2024-07-26 11:31:51.272506] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:23:55.730 [2024-07-26 11:31:51.272519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:23:55.730 [2024-07-26 11:31:51.272559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:55.730 [2024-07-26 11:31:51.272577] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:55.730 [2024-07-26 11:31:51.272590] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:55.730 [2024-07-26 11:31:51.272604] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:55.730 [2024-07-26 11:31:51.272617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:55.730 [2024-07-26 11:31:51.272659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
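Everything in the failure cascade above follows from one fact: the target has already stopped listening on 10.0.0.2:4420, so every reconnect attempt dies in posix_sock_create with errno = 111 (ECONNREFUSED), spdk_nvme_ctrlr_reconnect_poll_async gives up, and each controller is left in the failed state. A quick way to confirm that reading of errno 111 from outside the test, as a minimal sketch (assumes ss and nc are installed; the namespace and address follow the cvl_0_0_ns_spdk / 10.0.0.2 convention this harness sets up):

  # inside the target namespace: no output once the listener is gone
  ip netns exec cvl_0_0_ns_spdk ss -ltn 'sport = :4420'
  # from the initiator side: a refused connect matches the errno 111 entries above
  nc -z -w1 10.0.0.2 4420 || echo 'connection refused, as in the log above'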
00:23:56.297 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:23:56.297 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:23:57.232 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2170415 00:23:57.232 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2170415) - No such process 00:23:57.232 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:23:57.232 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:23:57.232 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:57.232 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:57.232 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:57.232 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:57.232 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:57.232 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:23:57.232 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:57.232 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:23:57.232 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:57.232 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:57.232 rmmod nvme_tcp 00:23:57.232 rmmod nvme_fabrics 00:23:57.232 rmmod nvme_keyring 00:23:57.491 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:57.491 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:23:57.491 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:23:57.491 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:57.491 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:57.491 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:57.491 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:57.491 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:57.491 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:57.491 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.491 11:31:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:57.491 11:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.394 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:59.394 00:23:59.394 real 0m8.021s 00:23:59.394 user 0m20.026s 00:23:59.394 sys 0m1.664s 00:23:59.394 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:59.394 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:59.394 ************************************ 00:23:59.394 END TEST nvmf_shutdown_tc3 00:23:59.394 ************************************ 00:23:59.394 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:23:59.394 00:23:59.394 real 0m29.400s 00:23:59.394 user 1m22.251s 00:23:59.394 sys 0m7.173s 00:23:59.394 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:59.394 11:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:59.394 ************************************ 00:23:59.394 END TEST nvmf_shutdown 00:23:59.394 ************************************ 00:23:59.394 11:31:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:23:59.394 00:23:59.394 real 12m1.109s 00:23:59.394 user 28m43.857s 00:23:59.394 sys 2m53.465s 00:23:59.394 11:31:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:59.394 11:31:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:59.394 ************************************ 00:23:59.394 END TEST nvmf_target_extra 00:23:59.394 ************************************ 00:23:59.394 11:31:55 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:59.394 11:31:55 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:59.394 11:31:55 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:59.394 11:31:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:59.653 ************************************ 00:23:59.653 START TEST nvmf_host 00:23:59.653 ************************************ 00:23:59.653 11:31:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:59.653 * Looking for test storage... 
00:23:59.653 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:59.653 11:31:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:59.653 11:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:59.653 11:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:59.653 11:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:59.653 11:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:59.653 11:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:59.653 11:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:59.653 11:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:59.653 11:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:59.653 11:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:59.653 11:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:59.653 11:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:59.653 11:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:59.653 11:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:59.653 11:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:59.653 11:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:59.653 11:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:59.653 11:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:59.653 11:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:59.653 11:31:55 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:59.653 11:31:55 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:59.653 11:31:55 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.654 ************************************ 00:23:59.654 START TEST nvmf_multicontroller 00:23:59.654 ************************************ 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:59.654 * Looking for test storage... 
00:23:59.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.654 11:31:55 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:23:59.654 11:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:02.189 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:02.189 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:24:02.189 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:02.189 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:02.189 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:02.189 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:02.189 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:02.189 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:24:02.189 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:02.189 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:24:02.189 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:24:02.189 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:24:02.189 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:24:02.189 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:24:02.189 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:24:02.189 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:02.189 11:31:57 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:02.190 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:02.190 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:02.190 Found net devices under 0000:84:00.0: cvl_0_0 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:02.190 Found net devices under 0000:84:00.1: cvl_0_1 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:02.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:02.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:24:02.190 00:24:02.190 --- 10.0.0.2 ping statistics --- 00:24:02.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.190 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:02.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:02.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:24:02.190 00:24:02.190 --- 10.0.0.1 ping statistics --- 00:24:02.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.190 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2172985 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2172985 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 2172985 ']' 00:24:02.190 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:02.191 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:02.191 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:02.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:02.191 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:02.191 11:31:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:02.191 [2024-07-26 11:31:57.832797] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
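nvmfappstart above launches nvmf_tgt inside the target namespace (the DPDK/EAL banner continues below), and the rpc_cmd calls that follow create the TCP transport, a 64 MiB Malloc bdev, and subsystem cnode1. A condensed sketch of the same bring-up done by hand, using the rpc.py path from this workspace; the final add_ns/add_listener lines are the usual next steps and are an assumption here, not copied from this excerpt:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # start the target in the namespace, as the harness does
  ip netns exec cvl_0_0_ns_spdk $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  # replay the rpc_cmd sequence from the trace below
  $spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  $spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  $spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  # assumed follow-up steps (not shown in this excerpt):
  $spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420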
00:24:02.191 [2024-07-26 11:31:57.832899] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:02.449 EAL: No free 2048 kB hugepages reported on node 1 00:24:02.449 [2024-07-26 11:31:57.922209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:02.449 [2024-07-26 11:31:58.067023] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:02.449 [2024-07-26 11:31:58.067092] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:02.450 [2024-07-26 11:31:58.067113] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:02.450 [2024-07-26 11:31:58.067131] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:02.450 [2024-07-26 11:31:58.067147] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:02.450 [2024-07-26 11:31:58.067272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:02.450 [2024-07-26 11:31:58.067358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:02.450 [2024-07-26 11:31:58.067362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:02.708 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:02.708 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:24:02.709 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:02.709 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:02.709 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:02.709 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:02.709 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:02.709 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.709 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:02.709 [2024-07-26 11:31:58.299288] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:02.709 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.709 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:02.709 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.709 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:02.709 Malloc0 00:24:02.709 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.709 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:02.709 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.709 
11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:02.709 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.709 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:02.709 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.709 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:02.709 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.709 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:02.709 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.709 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:02.709 [2024-07-26 11:31:58.362750] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:02.709 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.709 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:02.709 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.709 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:02.967 [2024-07-26 11:31:58.370610] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:02.967 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.967 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:02.967 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.967 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:02.967 Malloc1 00:24:02.967 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.967 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:02.967 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.967 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:02.967 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.967 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:02.967 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.967 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:02.967 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.967 11:31:58 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:02.967 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.967 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:02.967 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.967 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:02.967 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.967 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:02.967 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.967 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2173135 00:24:02.967 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:02.967 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:02.967 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2173135 /var/tmp/bdevperf.sock 00:24:02.967 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 2173135 ']' 00:24:02.967 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:02.967 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:02.967 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:02.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
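The setup traced above is plain JSON-RPC against the target socket; rpc_cmd is the test suite's wrapper around scripts/rpc.py. Reproduced as direct calls (values exactly as in this run), the target ends up with two subsystems, each backed by a 64 MB malloc bdev (512-byte blocks) and listening on both port 4420 and 4421 of the same address:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # ...and the same again for Malloc1 / nqn.2016-06.io.spdk:cnode2

bdevperf is then started with -z, so it idles until it is configured and kicked over its own RPC socket (/var/tmp/bdevperf.sock) rather than running a workload immediately.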
00:24:02.967 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:02.967 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:03.226 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:03.226 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:24:03.226 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:03.226 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.226 11:31:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:03.485 NVMe0n1 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.485 1 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:03.485 request: 00:24:03.485 { 00:24:03.485 "name": "NVMe0", 00:24:03.485 "trtype": "tcp", 00:24:03.485 "traddr": "10.0.0.2", 00:24:03.485 "adrfam": "ipv4", 00:24:03.485 
"trsvcid": "4420", 00:24:03.485 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.485 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:03.485 "hostaddr": "10.0.0.2", 00:24:03.485 "hostsvcid": "60000", 00:24:03.485 "prchk_reftag": false, 00:24:03.485 "prchk_guard": false, 00:24:03.485 "hdgst": false, 00:24:03.485 "ddgst": false, 00:24:03.485 "method": "bdev_nvme_attach_controller", 00:24:03.485 "req_id": 1 00:24:03.485 } 00:24:03.485 Got JSON-RPC error response 00:24:03.485 response: 00:24:03.485 { 00:24:03.485 "code": -114, 00:24:03.485 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:03.485 } 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:03.485 request: 00:24:03.485 { 00:24:03.485 "name": "NVMe0", 00:24:03.485 "trtype": "tcp", 00:24:03.485 "traddr": "10.0.0.2", 00:24:03.485 "adrfam": "ipv4", 00:24:03.485 "trsvcid": "4420", 00:24:03.485 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:03.485 "hostaddr": "10.0.0.2", 00:24:03.485 "hostsvcid": "60000", 00:24:03.485 "prchk_reftag": false, 00:24:03.485 "prchk_guard": false, 00:24:03.485 "hdgst": false, 00:24:03.485 "ddgst": false, 00:24:03.485 "method": "bdev_nvme_attach_controller", 00:24:03.485 "req_id": 1 00:24:03.485 } 00:24:03.485 Got JSON-RPC error response 00:24:03.485 response: 00:24:03.485 { 00:24:03.485 "code": -114, 00:24:03.485 "message": "A controller named NVMe0 already exists with the specified network 
path\n" 00:24:03.485 } 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.485 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:03.485 request: 00:24:03.485 { 00:24:03.485 "name": "NVMe0", 00:24:03.485 "trtype": "tcp", 00:24:03.485 "traddr": "10.0.0.2", 00:24:03.485 "adrfam": "ipv4", 00:24:03.485 "trsvcid": "4420", 00:24:03.485 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.485 "hostaddr": "10.0.0.2", 00:24:03.485 "hostsvcid": "60000", 00:24:03.485 "prchk_reftag": false, 00:24:03.485 "prchk_guard": false, 00:24:03.486 "hdgst": false, 00:24:03.486 "ddgst": false, 00:24:03.486 "multipath": "disable", 00:24:03.486 "method": "bdev_nvme_attach_controller", 00:24:03.486 "req_id": 1 00:24:03.486 } 00:24:03.486 Got JSON-RPC error response 00:24:03.486 response: 00:24:03.486 { 00:24:03.486 "code": -114, 00:24:03.486 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:24:03.486 } 00:24:03.486 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:03.486 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:24:03.486 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:03.486 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:03.486 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:03.486 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:03.486 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:24:03.486 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:03.486 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:03.486 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:03.486 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:03.486 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:03.486 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:03.486 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.486 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:03.486 request: 00:24:03.486 { 00:24:03.486 "name": "NVMe0", 00:24:03.486 "trtype": "tcp", 00:24:03.486 "traddr": "10.0.0.2", 00:24:03.486 "adrfam": "ipv4", 00:24:03.486 "trsvcid": "4420", 00:24:03.486 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.486 "hostaddr": "10.0.0.2", 00:24:03.486 "hostsvcid": "60000", 00:24:03.486 "prchk_reftag": false, 00:24:03.486 "prchk_guard": false, 00:24:03.486 "hdgst": false, 00:24:03.486 "ddgst": false, 00:24:03.486 "multipath": "failover", 00:24:03.486 "method": "bdev_nvme_attach_controller", 00:24:03.486 "req_id": 1 00:24:03.486 } 00:24:03.486 Got JSON-RPC error response 00:24:03.486 response: 00:24:03.486 { 00:24:03.486 "code": -114, 00:24:03.486 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:03.486 } 00:24:03.486 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:03.486 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:24:03.486 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:03.486 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:03.486 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:03.486 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:03.486 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.486 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:03.744 00:24:03.744 11:31:59 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.744 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:03.744 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.744 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:03.744 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.744 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:03.744 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.744 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:03.744 00:24:03.744 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.744 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:03.744 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.744 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:03.744 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:03.744 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.744 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:03.744 11:31:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:05.119 0 00:24:05.119 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:05.119 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.119 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:05.119 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.119 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2173135 00:24:05.119 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 2173135 ']' 00:24:05.119 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 2173135 00:24:05.119 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:24:05.119 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:05.119 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2173135 00:24:05.119 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 
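The four rejected attach attempts above are the point of the test: once NVMe0 exists, re-attaching under the same controller name with a different host NQN, with a different subsystem NQN, or with -x disable must fail with code -114 (-EALREADY on Linux), and -x failover is likewise refused when the "new" path is identical to the existing one. Only a genuinely new path (port 4421) attaches cleanly. After the detach/re-attach shuffle, the consistency check and the actual workload reduce to:

    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -c NVMe   # expect 2 (NVMe0 + NVMe1)
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

perform_tests triggers the job queued by -z (queue depth 128, 4 KiB writes, 1 s), which lands at roughly 17.1k IOPS in the try.txt excerpt further down.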
00:24:05.119 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:05.119 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2173135' 00:24:05.119 killing process with pid 2173135 00:24:05.119 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 2173135 00:24:05.119 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 2173135 00:24:05.378 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:05.378 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.378 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:05.378 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.378 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:05.378 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.378 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:05.378 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.378 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:24:05.378 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:05.378 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:24:05.378 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:24:05.378 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:05.378 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:24:05.378 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:05.378 [2024-07-26 11:31:58.483111] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
00:24:05.378 [2024-07-26 11:31:58.483217] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2173135 ] 00:24:05.378 EAL: No free 2048 kB hugepages reported on node 1 00:24:05.378 [2024-07-26 11:31:58.552002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.378 [2024-07-26 11:31:58.678217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.378 [2024-07-26 11:31:59.339741] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 943c3600-7ea8-432e-b611-84f9fce33964 already exists 00:24:05.378 [2024-07-26 11:31:59.339789] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:943c3600-7ea8-432e-b611-84f9fce33964 alias for bdev NVMe1n1 00:24:05.378 [2024-07-26 11:31:59.339807] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:05.378 Running I/O for 1 seconds... 00:24:05.378 00:24:05.378 Latency(us) 00:24:05.378 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.378 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:05.378 NVMe0n1 : 1.00 17142.11 66.96 0.00 0.00 7455.16 2208.81 13495.56 00:24:05.378 =================================================================================================================== 00:24:05.378 Total : 17142.11 66.96 0.00 0.00 7455.16 2208.81 13495.56 00:24:05.378 Received shutdown signal, test time was about 1.000000 seconds 00:24:05.378 00:24:05.378 Latency(us) 00:24:05.378 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.378 =================================================================================================================== 00:24:05.378 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:05.378 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:05.378 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:05.378 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:24:05.378 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:24:05.378 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:05.378 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:24:05.378 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:05.378 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:24:05.378 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:05.378 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:05.378 rmmod nvme_tcp 00:24:05.378 rmmod nvme_fabrics 00:24:05.378 rmmod nvme_keyring 00:24:05.378 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:05.378 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:24:05.378 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:24:05.378 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2172985 ']' 00:24:05.378 11:32:00 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2172985 00:24:05.378 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 2172985 ']' 00:24:05.378 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 2172985 00:24:05.378 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:24:05.378 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:05.378 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2172985 00:24:05.378 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:05.378 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:05.378 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2172985' 00:24:05.378 killing process with pid 2172985 00:24:05.378 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 2172985 00:24:05.378 11:32:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 2172985 00:24:05.945 11:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:05.945 11:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:05.945 11:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:05.945 11:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:05.945 11:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:05.945 11:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.945 11:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:05.945 11:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:07.865 11:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:07.865 00:24:07.865 real 0m8.274s 00:24:07.865 user 0m13.177s 00:24:07.865 sys 0m2.763s 00:24:07.865 11:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:07.865 11:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:07.865 ************************************ 00:24:07.865 END TEST nvmf_multicontroller 00:24:07.865 ************************************ 00:24:07.865 11:32:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:07.865 11:32:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:07.865 11:32:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:07.865 11:32:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.865 ************************************ 00:24:07.865 START TEST nvmf_aer 00:24:07.865 ************************************ 00:24:07.865 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:08.125 * Looking for test storage... 00:24:08.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:24:08.125 11:32:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:10.659 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:10.659 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:10.659 Found net devices under 0000:84:00.0: cvl_0_0 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.659 11:32:05 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:10.659 Found net devices under 0000:84:00.1: cvl_0_1 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:10.659 11:32:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:10.659 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:10.659 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:10.659 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:10.659 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:10.659 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:10.659 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:10.659 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:10.659 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:24:10.659 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:24:10.659 00:24:10.659 --- 10.0.0.2 ping statistics --- 00:24:10.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.659 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:24:10.659 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:10.659 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:10.659 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:24:10.659 00:24:10.659 --- 10.0.0.1 ping statistics --- 00:24:10.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.659 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:24:10.659 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:10.659 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:24:10.659 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:10.659 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:10.659 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:10.659 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:10.659 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:10.659 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:10.660 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:10.660 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:10.660 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:10.660 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:10.660 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:10.660 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2175372 00:24:10.660 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:10.660 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2175372 00:24:10.660 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 2175372 ']' 00:24:10.660 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.660 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:10.660 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.660 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:10.660 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:10.660 [2024-07-26 11:32:06.193154] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
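Same bring-up as the multicontroller test, with one visible difference: this target runs with -m 0xF (reactors on cores 0-3) instead of 0xE (cores 1-3). nvmfappstart records the pid and then blocks in waitforlisten until the RPC socket answers. A simplified sketch of that handshake (the real helper in autotest_common.sh adds retry limits and more bookkeeping):

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    while ! scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1   # give up if the target already died
        sleep 0.1
    done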
00:24:10.660 [2024-07-26 11:32:06.193253] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.660 EAL: No free 2048 kB hugepages reported on node 1 00:24:10.660 [2024-07-26 11:32:06.271007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:10.918 [2024-07-26 11:32:06.398804] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:10.918 [2024-07-26 11:32:06.398870] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:10.918 [2024-07-26 11:32:06.398887] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:10.918 [2024-07-26 11:32:06.398901] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:10.918 [2024-07-26 11:32:06.398913] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:10.918 [2024-07-26 11:32:06.399000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.918 [2024-07-26 11:32:06.399059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:10.918 [2024-07-26 11:32:06.399110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:10.918 [2024-07-26 11:32:06.399113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.918 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:10.918 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:24:10.918 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:10.918 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:10.918 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:10.918 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:10.918 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:10.918 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.918 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:10.918 [2024-07-26 11:32:06.568276] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:10.918 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.918 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:10.918 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.918 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:11.177 Malloc0 00:24:11.177 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.177 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:11.177 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.177 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:11.177 11:32:06 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.177 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:11.177 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.177 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:11.177 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.177 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:11.177 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.177 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:11.177 [2024-07-26 11:32:06.622599] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:11.177 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.177 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:11.177 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.177 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:11.177 [ 00:24:11.177 { 00:24:11.177 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:11.177 "subtype": "Discovery", 00:24:11.177 "listen_addresses": [], 00:24:11.177 "allow_any_host": true, 00:24:11.177 "hosts": [] 00:24:11.177 }, 00:24:11.177 { 00:24:11.177 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:11.177 "subtype": "NVMe", 00:24:11.177 "listen_addresses": [ 00:24:11.177 { 00:24:11.177 "trtype": "TCP", 00:24:11.177 "adrfam": "IPv4", 00:24:11.177 "traddr": "10.0.0.2", 00:24:11.177 "trsvcid": "4420" 00:24:11.177 } 00:24:11.177 ], 00:24:11.177 "allow_any_host": true, 00:24:11.177 "hosts": [], 00:24:11.177 "serial_number": "SPDK00000000000001", 00:24:11.177 "model_number": "SPDK bdev Controller", 00:24:11.177 "max_namespaces": 2, 00:24:11.177 "min_cntlid": 1, 00:24:11.177 "max_cntlid": 65519, 00:24:11.177 "namespaces": [ 00:24:11.177 { 00:24:11.177 "nsid": 1, 00:24:11.177 "bdev_name": "Malloc0", 00:24:11.177 "name": "Malloc0", 00:24:11.177 "nguid": "378324C2E47E4144A85565641A649B0E", 00:24:11.177 "uuid": "378324c2-e47e-4144-a855-65641a649b0e" 00:24:11.177 } 00:24:11.177 ] 00:24:11.177 } 00:24:11.177 ] 00:24:11.177 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.177 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:11.177 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:11.177 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2175506 00:24:11.177 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:11.177 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:11.177 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:24:11.177 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:11.177 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:24:11.177 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:24:11.177 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:11.177 EAL: No free 2048 kB hugepages reported on node 1 00:24:11.177 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:11.177 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:24:11.177 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:24:11.177 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:11.435 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:11.435 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:24:11.435 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:24:11.435 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:11.435 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:11.435 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 3 -lt 200 ']' 00:24:11.435 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=4 00:24:11.435 11:32:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:11.435 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:11.435 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:11.435 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:24:11.436 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:11.436 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.436 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:11.694 Malloc1 00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:11.694 [ 00:24:11.694 { 00:24:11.694 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:11.694 "subtype": "Discovery", 00:24:11.694 "listen_addresses": [], 00:24:11.694 "allow_any_host": true, 00:24:11.694 "hosts": [] 00:24:11.694 }, 00:24:11.694 { 00:24:11.694 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:11.694 "subtype": "NVMe", 00:24:11.694 "listen_addresses": [ 00:24:11.694 { 00:24:11.694 "trtype": "TCP", 00:24:11.694 "adrfam": "IPv4", 00:24:11.694 "traddr": "10.0.0.2", 00:24:11.694 "trsvcid": "4420" 00:24:11.694 } 00:24:11.694 ], 00:24:11.694 "allow_any_host": true, 00:24:11.694 "hosts": [], 00:24:11.694 "serial_number": "SPDK00000000000001", 00:24:11.694 "model_number": "SPDK bdev Controller", 00:24:11.694 "max_namespaces": 2, 00:24:11.694 "min_cntlid": 1, 00:24:11.694 "max_cntlid": 65519, 00:24:11.694 "namespaces": [ 00:24:11.694 { 00:24:11.694 "nsid": 1, 00:24:11.694 "bdev_name": "Malloc0", 00:24:11.694 "name": "Malloc0", 00:24:11.694 "nguid": "378324C2E47E4144A85565641A649B0E", 00:24:11.694 "uuid": "378324c2-e47e-4144-a855-65641a649b0e" 00:24:11.694 }, 00:24:11.694 { 00:24:11.694 "nsid": 2, 00:24:11.694 "bdev_name": "Malloc1", 00:24:11.694 "name": "Malloc1", 00:24:11.694 "nguid": "0B465DE8BE9942CB9079671EF6E92FB6", 00:24:11.694 "uuid": "0b465de8-be99-42cb-9079-671ef6e92fb6" 00:24:11.694 } 00:24:11.694 ] 00:24:11.694 } 00:24:11.694 ] 00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2175506 00:24:11.694 Asynchronous Event Request test 00:24:11.694 Attaching to 10.0.0.2 00:24:11.694 Attached to 10.0.0.2 00:24:11.694 Registering asynchronous event callbacks... 00:24:11.694 Starting namespace attribute notice tests for all controllers... 00:24:11.694 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:11.694 aer_cb - Changed Namespace 00:24:11.694 Cleaning up... 
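The AER pass above is driven entirely over RPC: the subsystem starts with a single namespace, the aer test client connects and registers its callback, and hot-adding a second namespace fires the Namespace Attribute Changed notice (log page 4, aen_event_type 0x02) seen in the callback output. The same provisioning replayed with the standard scripts/rpc.py client (a sketch assuming the default /var/tmp/spdk.sock and the addresses from this run):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0          # 64 MiB, 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# with the aer client attached, adding a second namespace triggers the AEN:
scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1         # 64 MiB, 4 KiB blocks
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2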
00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:11.694 rmmod nvme_tcp 00:24:11.694 rmmod nvme_fabrics 00:24:11.694 rmmod nvme_keyring 00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2175372 ']' 00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2175372 00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 2175372 ']' 00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 2175372 00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2175372 00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:11.694 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2175372' 00:24:11.695 killing process with pid 2175372 00:24:11.695 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # 
kill 2175372 00:24:11.695 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 2175372 00:24:11.954 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:11.954 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:11.954 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:11.954 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:11.954 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:11.954 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:11.954 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:11.954 11:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.488 11:32:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:14.488 00:24:14.488 real 0m6.107s 00:24:14.488 user 0m5.237s 00:24:14.488 sys 0m2.388s 00:24:14.488 11:32:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:14.488 11:32:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:14.488 ************************************ 00:24:14.488 END TEST nvmf_aer 00:24:14.488 ************************************ 00:24:14.488 11:32:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:14.488 11:32:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:14.488 11:32:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:14.488 11:32:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.488 ************************************ 00:24:14.488 START TEST nvmf_async_init 00:24:14.488 ************************************ 00:24:14.488 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:14.488 * Looking for test storage... 
00:24:14.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:14.488 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:14.488 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:14.489 11:32:09 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=2a113df1dfa14a7fab89f4ea0b06b095 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:24:14.489 11:32:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:17.021 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:17.021 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
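The discovery loop above maps each supported NIC PCI function to its kernel net devices through sysfs; on this rig the two E810 functions (0000:84:00.0 and 0000:84:00.1, device id 0x159b) each resolve to one device, cvl_0_0 and cvl_0_1. The same lookup done by hand (a sketch, hard-coding this host's bus addresses):

for pci in 0000:84:00.0 0000:84:00.1; do
    # each PCI function lists its netdevs under /sys/bus/pci/devices/<bdf>/net/
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        echo "Found net devices under $pci: ${dev##*/}"
    done
done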
00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:17.021 Found net devices under 0000:84:00.0: cvl_0_0 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:17.021 Found net devices under 0000:84:00.1: cvl_0_1 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- 
# NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:17.021 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:17.022 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:17.022 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:17.022 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:24:17.022 00:24:17.022 --- 10.0.0.2 ping statistics --- 00:24:17.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.022 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:24:17.022 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:17.022 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:17.022 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:24:17.022 00:24:17.022 --- 10.0.0.1 ping statistics --- 00:24:17.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.022 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:24:17.022 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:17.022 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:24:17.022 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:17.022 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:17.022 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:17.022 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:17.022 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:17.022 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:17.022 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:17.022 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:17.022 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:17.022 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:17.022 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.022 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2177581 00:24:17.022 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:17.022 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2177581 00:24:17.022 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 2177581 ']' 00:24:17.022 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:17.022 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:17.022 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:17.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:17.022 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:17.022 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.022 [2024-07-26 11:32:12.427135] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
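nvmfappstart, traced above, launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the app's RPC socket answers, which is what produces the "Waiting for process to start up..." line. In outline (a sketch; the real helper retries an RPC rather than testing for the bare socket):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
until [ -S /var/tmp/spdk.sock ]; do   # simplified stand-in for waitforlisten
    sleep 0.1
done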
00:24:17.022 [2024-07-26 11:32:12.427241] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:17.022 EAL: No free 2048 kB hugepages reported on node 1 00:24:17.022 [2024-07-26 11:32:12.511035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.022 [2024-07-26 11:32:12.636140] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:17.022 [2024-07-26 11:32:12.636207] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:17.022 [2024-07-26 11:32:12.636224] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:17.022 [2024-07-26 11:32:12.636237] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:17.022 [2024-07-26 11:32:12.636249] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:17.022 [2024-07-26 11:32:12.636287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.280 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:17.280 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:24:17.280 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:17.280 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:17.280 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.280 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:17.280 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:17.280 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.280 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.280 [2024-07-26 11:32:12.793385] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:17.280 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.280 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:17.280 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.280 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.280 null0 00:24:17.280 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.280 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:17.280 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.280 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.280 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.280 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:17.280 11:32:12 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.280 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.280 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.280 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 2a113df1dfa14a7fab89f4ea0b06b095 00:24:17.280 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.280 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.280 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.280 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:17.280 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.280 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.280 [2024-07-26 11:32:12.833696] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:17.280 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.280 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:17.280 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.280 11:32:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.575 nvme0n1 00:24:17.575 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.575 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:17.575 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.575 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.575 [ 00:24:17.575 { 00:24:17.575 "name": "nvme0n1", 00:24:17.575 "aliases": [ 00:24:17.575 "2a113df1-dfa1-4a7f-ab89-f4ea0b06b095" 00:24:17.575 ], 00:24:17.575 "product_name": "NVMe disk", 00:24:17.575 "block_size": 512, 00:24:17.575 "num_blocks": 2097152, 00:24:17.575 "uuid": "2a113df1-dfa1-4a7f-ab89-f4ea0b06b095", 00:24:17.575 "assigned_rate_limits": { 00:24:17.575 "rw_ios_per_sec": 0, 00:24:17.575 "rw_mbytes_per_sec": 0, 00:24:17.575 "r_mbytes_per_sec": 0, 00:24:17.575 "w_mbytes_per_sec": 0 00:24:17.575 }, 00:24:17.575 "claimed": false, 00:24:17.575 "zoned": false, 00:24:17.575 "supported_io_types": { 00:24:17.575 "read": true, 00:24:17.575 "write": true, 00:24:17.575 "unmap": false, 00:24:17.575 "flush": true, 00:24:17.575 "reset": true, 00:24:17.575 "nvme_admin": true, 00:24:17.575 "nvme_io": true, 00:24:17.575 "nvme_io_md": false, 00:24:17.575 "write_zeroes": true, 00:24:17.575 "zcopy": false, 00:24:17.575 "get_zone_info": false, 00:24:17.575 "zone_management": false, 00:24:17.575 "zone_append": false, 00:24:17.575 "compare": true, 00:24:17.575 "compare_and_write": true, 00:24:17.575 "abort": true, 00:24:17.575 "seek_hole": false, 00:24:17.575 "seek_data": false, 00:24:17.575 "copy": true, 00:24:17.575 "nvme_iov_md": 
false 00:24:17.575 }, 00:24:17.575 "memory_domains": [ 00:24:17.575 { 00:24:17.575 "dma_device_id": "system", 00:24:17.575 "dma_device_type": 1 00:24:17.575 } 00:24:17.575 ], 00:24:17.575 "driver_specific": { 00:24:17.575 "nvme": [ 00:24:17.575 { 00:24:17.575 "trid": { 00:24:17.575 "trtype": "TCP", 00:24:17.575 "adrfam": "IPv4", 00:24:17.575 "traddr": "10.0.0.2", 00:24:17.575 "trsvcid": "4420", 00:24:17.575 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:17.575 }, 00:24:17.575 "ctrlr_data": { 00:24:17.575 "cntlid": 1, 00:24:17.575 "vendor_id": "0x8086", 00:24:17.575 "model_number": "SPDK bdev Controller", 00:24:17.575 "serial_number": "00000000000000000000", 00:24:17.575 "firmware_revision": "24.09", 00:24:17.575 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:17.575 "oacs": { 00:24:17.575 "security": 0, 00:24:17.575 "format": 0, 00:24:17.575 "firmware": 0, 00:24:17.575 "ns_manage": 0 00:24:17.575 }, 00:24:17.575 "multi_ctrlr": true, 00:24:17.575 "ana_reporting": false 00:24:17.575 }, 00:24:17.575 "vs": { 00:24:17.575 "nvme_version": "1.3" 00:24:17.575 }, 00:24:17.575 "ns_data": { 00:24:17.575 "id": 1, 00:24:17.575 "can_share": true 00:24:17.575 } 00:24:17.575 } 00:24:17.575 ], 00:24:17.575 "mp_policy": "active_passive" 00:24:17.575 } 00:24:17.575 } 00:24:17.575 ] 00:24:17.575 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.575 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:17.575 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.575 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.575 [2024-07-26 11:32:13.086797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:17.575 [2024-07-26 11:32:13.086886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x814700 (9): Bad file descriptor 00:24:17.834 [2024-07-26 11:32:13.259606] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
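The reset sequence above drops the existing admin connection (hence the transient "Bad file descriptor" on the old qpair at 0x814700) and rebuilds the association; the controller ID is reallocated on reconnect, which is why the bdev_get_bdevs dump that follows reports cntlid 2 where the first dump showed cntlid 1. As standalone RPCs (a sketch, names as in this run):

scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
scripts/rpc.py bdev_nvme_reset_controller nvme0   # disconnect, then reconnect all qpairs
scripts/rpc.py bdev_get_bdevs -b nvme0n1          # ctrlr_data.cntlid has advanced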
00:24:17.834 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.834 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:17.834 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.834 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.834 [ 00:24:17.834 { 00:24:17.834 "name": "nvme0n1", 00:24:17.834 "aliases": [ 00:24:17.834 "2a113df1-dfa1-4a7f-ab89-f4ea0b06b095" 00:24:17.834 ], 00:24:17.834 "product_name": "NVMe disk", 00:24:17.834 "block_size": 512, 00:24:17.834 "num_blocks": 2097152, 00:24:17.834 "uuid": "2a113df1-dfa1-4a7f-ab89-f4ea0b06b095", 00:24:17.834 "assigned_rate_limits": { 00:24:17.834 "rw_ios_per_sec": 0, 00:24:17.834 "rw_mbytes_per_sec": 0, 00:24:17.834 "r_mbytes_per_sec": 0, 00:24:17.834 "w_mbytes_per_sec": 0 00:24:17.834 }, 00:24:17.834 "claimed": false, 00:24:17.834 "zoned": false, 00:24:17.834 "supported_io_types": { 00:24:17.834 "read": true, 00:24:17.834 "write": true, 00:24:17.834 "unmap": false, 00:24:17.834 "flush": true, 00:24:17.834 "reset": true, 00:24:17.834 "nvme_admin": true, 00:24:17.834 "nvme_io": true, 00:24:17.834 "nvme_io_md": false, 00:24:17.834 "write_zeroes": true, 00:24:17.834 "zcopy": false, 00:24:17.834 "get_zone_info": false, 00:24:17.834 "zone_management": false, 00:24:17.834 "zone_append": false, 00:24:17.834 "compare": true, 00:24:17.834 "compare_and_write": true, 00:24:17.834 "abort": true, 00:24:17.834 "seek_hole": false, 00:24:17.834 "seek_data": false, 00:24:17.834 "copy": true, 00:24:17.834 "nvme_iov_md": false 00:24:17.834 }, 00:24:17.834 "memory_domains": [ 00:24:17.834 { 00:24:17.834 "dma_device_id": "system", 00:24:17.834 "dma_device_type": 1 00:24:17.834 } 00:24:17.834 ], 00:24:17.834 "driver_specific": { 00:24:17.834 "nvme": [ 00:24:17.834 { 00:24:17.834 "trid": { 00:24:17.834 "trtype": "TCP", 00:24:17.834 "adrfam": "IPv4", 00:24:17.834 "traddr": "10.0.0.2", 00:24:17.834 "trsvcid": "4420", 00:24:17.834 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:17.834 }, 00:24:17.834 "ctrlr_data": { 00:24:17.834 "cntlid": 2, 00:24:17.834 "vendor_id": "0x8086", 00:24:17.834 "model_number": "SPDK bdev Controller", 00:24:17.834 "serial_number": "00000000000000000000", 00:24:17.834 "firmware_revision": "24.09", 00:24:17.834 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:17.834 "oacs": { 00:24:17.834 "security": 0, 00:24:17.834 "format": 0, 00:24:17.834 "firmware": 0, 00:24:17.834 "ns_manage": 0 00:24:17.834 }, 00:24:17.834 "multi_ctrlr": true, 00:24:17.834 "ana_reporting": false 00:24:17.834 }, 00:24:17.834 "vs": { 00:24:17.834 "nvme_version": "1.3" 00:24:17.834 }, 00:24:17.834 "ns_data": { 00:24:17.834 "id": 1, 00:24:17.834 "can_share": true 00:24:17.834 } 00:24:17.834 } 00:24:17.834 ], 00:24:17.834 "mp_policy": "active_passive" 00:24:17.834 } 00:24:17.834 } 00:24:17.834 ] 00:24:17.834 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.834 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.834 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.834 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.834 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.834 11:32:13 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:17.834 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.3v3Uap4OGq 00:24:17.834 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:17.834 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.3v3Uap4OGq 00:24:17.834 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:17.834 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.834 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.834 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.834 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:17.834 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.834 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.834 [2024-07-26 11:32:13.311637] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:17.834 [2024-07-26 11:32:13.311809] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:17.834 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.834 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3v3Uap4OGq 00:24:17.834 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.834 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.834 [2024-07-26 11:32:13.319649] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:17.834 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.834 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3v3Uap4OGq 00:24:17.834 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.834 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.835 [2024-07-26 11:32:13.327684] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:17.835 [2024-07-26 11:32:13.327757] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:17.835 nvme0n1 00:24:17.835 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.835 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:17.835 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 
00:24:17.835 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.835 [ 00:24:17.835 { 00:24:17.835 "name": "nvme0n1", 00:24:17.835 "aliases": [ 00:24:17.835 "2a113df1-dfa1-4a7f-ab89-f4ea0b06b095" 00:24:17.835 ], 00:24:17.835 "product_name": "NVMe disk", 00:24:17.835 "block_size": 512, 00:24:17.835 "num_blocks": 2097152, 00:24:17.835 "uuid": "2a113df1-dfa1-4a7f-ab89-f4ea0b06b095", 00:24:17.835 "assigned_rate_limits": { 00:24:17.835 "rw_ios_per_sec": 0, 00:24:17.835 "rw_mbytes_per_sec": 0, 00:24:17.835 "r_mbytes_per_sec": 0, 00:24:17.835 "w_mbytes_per_sec": 0 00:24:17.835 }, 00:24:17.835 "claimed": false, 00:24:17.835 "zoned": false, 00:24:17.835 "supported_io_types": { 00:24:17.835 "read": true, 00:24:17.835 "write": true, 00:24:17.835 "unmap": false, 00:24:17.835 "flush": true, 00:24:17.835 "reset": true, 00:24:17.835 "nvme_admin": true, 00:24:17.835 "nvme_io": true, 00:24:17.835 "nvme_io_md": false, 00:24:17.835 "write_zeroes": true, 00:24:17.835 "zcopy": false, 00:24:17.835 "get_zone_info": false, 00:24:17.835 "zone_management": false, 00:24:17.835 "zone_append": false, 00:24:17.835 "compare": true, 00:24:17.835 "compare_and_write": true, 00:24:17.835 "abort": true, 00:24:17.835 "seek_hole": false, 00:24:17.835 "seek_data": false, 00:24:17.835 "copy": true, 00:24:17.835 "nvme_iov_md": false 00:24:17.835 }, 00:24:17.835 "memory_domains": [ 00:24:17.835 { 00:24:17.835 "dma_device_id": "system", 00:24:17.835 "dma_device_type": 1 00:24:17.835 } 00:24:17.835 ], 00:24:17.835 "driver_specific": { 00:24:17.835 "nvme": [ 00:24:17.835 { 00:24:17.835 "trid": { 00:24:17.835 "trtype": "TCP", 00:24:17.835 "adrfam": "IPv4", 00:24:17.835 "traddr": "10.0.0.2", 00:24:17.835 "trsvcid": "4421", 00:24:17.835 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:17.835 }, 00:24:17.835 "ctrlr_data": { 00:24:17.835 "cntlid": 3, 00:24:17.835 "vendor_id": "0x8086", 00:24:17.835 "model_number": "SPDK bdev Controller", 00:24:17.835 "serial_number": "00000000000000000000", 00:24:17.835 "firmware_revision": "24.09", 00:24:17.835 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:17.835 "oacs": { 00:24:17.835 "security": 0, 00:24:17.835 "format": 0, 00:24:17.835 "firmware": 0, 00:24:17.835 "ns_manage": 0 00:24:17.835 }, 00:24:17.835 "multi_ctrlr": true, 00:24:17.835 "ana_reporting": false 00:24:17.835 }, 00:24:17.835 "vs": { 00:24:17.835 "nvme_version": "1.3" 00:24:17.835 }, 00:24:17.835 "ns_data": { 00:24:17.835 "id": 1, 00:24:17.835 "can_share": true 00:24:17.835 } 00:24:17.835 } 00:24:17.835 ], 00:24:17.835 "mp_policy": "active_passive" 00:24:17.835 } 00:24:17.835 } 00:24:17.835 ] 00:24:17.835 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.835 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.835 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.835 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.835 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.835 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.3v3Uap4OGq 00:24:17.835 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:17.835 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:24:17.835 11:32:13 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:17.835 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:24:17.835 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:17.835 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:24:17.835 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:17.835 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:17.835 rmmod nvme_tcp 00:24:17.835 rmmod nvme_fabrics 00:24:17.835 rmmod nvme_keyring 00:24:18.092 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:18.092 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:24:18.092 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:24:18.092 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2177581 ']' 00:24:18.092 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2177581 00:24:18.092 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 2177581 ']' 00:24:18.092 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 2177581 00:24:18.092 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:24:18.092 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:18.092 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2177581 00:24:18.092 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:18.092 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:18.092 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2177581' 00:24:18.092 killing process with pid 2177581 00:24:18.092 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 2177581 00:24:18.092 [2024-07-26 11:32:13.532520] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:18.092 [2024-07-26 11:32:13.532558] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:18.092 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 2177581 00:24:18.351 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:18.351 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:18.351 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:18.351 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:18.351 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:18.351 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.351 11:32:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:18.351 11:32:13 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.281 11:32:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:20.282 00:24:20.282 real 0m6.192s 00:24:20.282 user 0m2.296s 00:24:20.282 sys 0m2.347s 00:24:20.282 11:32:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:20.282 11:32:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:20.282 ************************************ 00:24:20.282 END TEST nvmf_async_init 00:24:20.282 ************************************ 00:24:20.282 11:32:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:20.282 11:32:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:20.282 11:32:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:20.282 11:32:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.282 ************************************ 00:24:20.282 START TEST dma 00:24:20.282 ************************************ 00:24:20.282 11:32:15 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:20.542 * Looking for test storage... 00:24:20.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:20.542 11:32:15 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:20.542 11:32:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:24:20.542 11:32:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:20.542 11:32:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:20.542 11:32:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:20.542 11:32:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:20.542 11:32:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:20.542 11:32:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:20.542 11:32:15 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:20.542 11:32:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:20.543 
11:32:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:20.543 11:32:16 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:24:20.543 00:24:20.543 real 0m0.098s 00:24:20.543 user 0m0.044s 00:24:20.543 sys 0m0.059s 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:24:20.543 ************************************ 00:24:20.543 END TEST dma 00:24:20.543 ************************************ 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.543 ************************************ 00:24:20.543 START TEST nvmf_identify 00:24:20.543 ************************************ 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:20.543 * Looking for test storage... 00:24:20.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:20.543 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:20.544 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:20.544 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:20.544 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:20.544 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:20.544 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:20.544 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:20.544 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:20.544 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:20.544 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.544 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:20.544 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.544 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:20.544 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:20.544 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:24:20.544 11:32:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:23.078 11:32:18 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:23.078 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:23.078 11:32:18 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:23.078 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:23.078 Found net devices under 0000:84:00.0: cvl_0_0 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:23.078 Found net devices under 0000:84:00.1: cvl_0_1 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:23.078 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:23.337 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:23.337 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:23.337 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:23.337 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:23.337 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:23.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:23.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:24:23.337 00:24:23.337 --- 10.0.0.2 ping statistics --- 00:24:23.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.337 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:24:23.337 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:23.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:23.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:24:23.337 00:24:23.337 --- 10.0.0.1 ping statistics --- 00:24:23.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.337 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:24:23.337 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:23.337 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:24:23.337 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:23.337 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:23.337 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:23.337 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:23.337 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:23.337 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:23.337 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:23.337 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:23.337 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:23.337 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:23.337 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2179732 00:24:23.337 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:23.337 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:23.337 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2179732 00:24:23.337 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 2179732 ']' 00:24:23.337 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.337 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:23.337 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:23.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:23.337 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:23.337 11:32:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:23.337 [2024-07-26 11:32:18.919146] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
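At this point the identify fixture has finished building the suite's two-port loopback topology: the first E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens TCP/4420, and both directions are ping-verified before nvmf_tgt starts inside the namespace. A condensed sketch of that setup, commands lifted from the trace (the cvl_* names are the suite's renames of the two NIC ports under 0000:84:00.x):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns
    # the target then runs namespaced, as identify.sh@18 does in the trace:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

Keeping the target in its own namespace is what lets a single host drive real NIC-to-NIC NVMe/TCP traffic between the two physical ports, rather than the kernel short-circuiting the 10.0.0.1 to 10.0.0.2 path over loopback.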
00:24:23.337 [2024-07-26 11:32:18.919254] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:23.337 EAL: No free 2048 kB hugepages reported on node 1 00:24:23.595 [2024-07-26 11:32:19.003327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:23.595 [2024-07-26 11:32:19.130529] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:23.595 [2024-07-26 11:32:19.130602] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:23.595 [2024-07-26 11:32:19.130618] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:23.595 [2024-07-26 11:32:19.130632] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:23.595 [2024-07-26 11:32:19.130643] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:23.595 [2024-07-26 11:32:19.130750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:23.595 [2024-07-26 11:32:19.130806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:23.595 [2024-07-26 11:32:19.130878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:23.595 [2024-07-26 11:32:19.130882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:23.855 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:23.855 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:24:23.855 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:23.855 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.855 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:23.855 [2024-07-26 11:32:19.330314] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:23.855 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.855 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:23.855 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:23.855 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:23.855 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:23.855 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.855 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:23.855 Malloc0 00:24:23.855 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.855 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:23.855 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.855 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:23.855 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:24:23.855 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:23.855 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.855 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:23.855 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.855 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:23.855 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.855 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:23.855 [2024-07-26 11:32:19.409033] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:23.855 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.855 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:23.855 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.855 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:23.855 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.855 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:23.855 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.855 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:23.855 [ 00:24:23.855 { 00:24:23.855 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:23.855 "subtype": "Discovery", 00:24:23.855 "listen_addresses": [ 00:24:23.855 { 00:24:23.855 "trtype": "TCP", 00:24:23.855 "adrfam": "IPv4", 00:24:23.855 "traddr": "10.0.0.2", 00:24:23.855 "trsvcid": "4420" 00:24:23.855 } 00:24:23.855 ], 00:24:23.855 "allow_any_host": true, 00:24:23.855 "hosts": [] 00:24:23.855 }, 00:24:23.855 { 00:24:23.855 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:23.855 "subtype": "NVMe", 00:24:23.855 "listen_addresses": [ 00:24:23.855 { 00:24:23.855 "trtype": "TCP", 00:24:23.855 "adrfam": "IPv4", 00:24:23.855 "traddr": "10.0.0.2", 00:24:23.855 "trsvcid": "4420" 00:24:23.855 } 00:24:23.855 ], 00:24:23.855 "allow_any_host": true, 00:24:23.855 "hosts": [], 00:24:23.855 "serial_number": "SPDK00000000000001", 00:24:23.855 "model_number": "SPDK bdev Controller", 00:24:23.855 "max_namespaces": 32, 00:24:23.855 "min_cntlid": 1, 00:24:23.855 "max_cntlid": 65519, 00:24:23.855 "namespaces": [ 00:24:23.855 { 00:24:23.855 "nsid": 1, 00:24:23.855 "bdev_name": "Malloc0", 00:24:23.855 "name": "Malloc0", 00:24:23.855 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:23.855 "eui64": "ABCDEF0123456789", 00:24:23.855 "uuid": "f66c5b40-eeb1-4326-a4db-c624a641299b" 00:24:23.855 } 00:24:23.855 ] 00:24:23.855 } 00:24:23.855 ] 00:24:23.855 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.855 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' 
trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:23.855 [2024-07-26 11:32:19.452804] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:24:23.855 [2024-07-26 11:32:19.452859] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2179879 ] 00:24:23.855 EAL: No free 2048 kB hugepages reported on node 1 00:24:23.855 [2024-07-26 11:32:19.493118] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:23.855 [2024-07-26 11:32:19.493190] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:23.855 [2024-07-26 11:32:19.493202] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:23.855 [2024-07-26 11:32:19.493219] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:23.855 [2024-07-26 11:32:19.493235] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:23.855 [2024-07-26 11:32:19.493627] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:23.855 [2024-07-26 11:32:19.493688] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x177c540 0 00:24:23.855 [2024-07-26 11:32:19.504438] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:23.855 [2024-07-26 11:32:19.504467] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:23.855 [2024-07-26 11:32:19.504478] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:23.855 [2024-07-26 11:32:19.504485] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:23.855 [2024-07-26 11:32:19.504544] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.855 [2024-07-26 11:32:19.504558] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.855 [2024-07-26 11:32:19.504566] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x177c540) 00:24:23.855 [2024-07-26 11:32:19.504586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:23.856 [2024-07-26 11:32:19.504617] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dc3c0, cid 0, qid 0 00:24:23.856 [2024-07-26 11:32:19.512444] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.856 [2024-07-26 11:32:19.512464] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.856 [2024-07-26 11:32:19.512480] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.856 [2024-07-26 11:32:19.512490] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17dc3c0) on tqpair=0x177c540 00:24:23.856 [2024-07-26 11:32:19.512507] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:23.856 [2024-07-26 11:32:19.512520] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:23.856 [2024-07-26 11:32:19.512530] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] 
setting state to read vs wait for vs (no timeout) 00:24:23.856 [2024-07-26 11:32:19.512555] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.856 [2024-07-26 11:32:19.512564] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.856 [2024-07-26 11:32:19.512572] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x177c540) 00:24:23.856 [2024-07-26 11:32:19.512584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.856 [2024-07-26 11:32:19.512612] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dc3c0, cid 0, qid 0 00:24:23.856 [2024-07-26 11:32:19.512851] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.856 [2024-07-26 11:32:19.512868] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.856 [2024-07-26 11:32:19.512876] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.856 [2024-07-26 11:32:19.512883] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17dc3c0) on tqpair=0x177c540 00:24:23.856 [2024-07-26 11:32:19.512898] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:23.856 [2024-07-26 11:32:19.512914] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:23.856 [2024-07-26 11:32:19.512928] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.856 [2024-07-26 11:32:19.512936] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.856 [2024-07-26 11:32:19.512943] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x177c540) 00:24:23.856 [2024-07-26 11:32:19.512955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.856 [2024-07-26 11:32:19.512980] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dc3c0, cid 0, qid 0 00:24:23.856 [2024-07-26 11:32:19.513171] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.856 [2024-07-26 11:32:19.513184] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.856 [2024-07-26 11:32:19.513192] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.856 [2024-07-26 11:32:19.513199] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17dc3c0) on tqpair=0x177c540 00:24:23.856 [2024-07-26 11:32:19.513208] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:23.856 [2024-07-26 11:32:19.513223] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:23.856 [2024-07-26 11:32:19.513237] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.856 [2024-07-26 11:32:19.513245] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.856 [2024-07-26 11:32:19.513252] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x177c540) 00:24:23.856 [2024-07-26 11:32:19.513264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.856 [2024-07-26 11:32:19.513287] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dc3c0, cid 0, qid 0 00:24:23.856 [2024-07-26 11:32:19.513517] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.856 [2024-07-26 11:32:19.513534] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.856 [2024-07-26 11:32:19.513547] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.856 [2024-07-26 11:32:19.513555] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17dc3c0) on tqpair=0x177c540 00:24:23.856 [2024-07-26 11:32:19.513565] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:23.856 [2024-07-26 11:32:19.513584] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.856 [2024-07-26 11:32:19.513594] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.856 [2024-07-26 11:32:19.513601] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x177c540) 00:24:23.856 [2024-07-26 11:32:19.513613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.856 [2024-07-26 11:32:19.513637] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dc3c0, cid 0, qid 0 00:24:23.856 [2024-07-26 11:32:19.513806] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.856 [2024-07-26 11:32:19.513823] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.856 [2024-07-26 11:32:19.513830] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.856 [2024-07-26 11:32:19.513838] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17dc3c0) on tqpair=0x177c540 00:24:23.856 [2024-07-26 11:32:19.513847] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:23.856 [2024-07-26 11:32:19.513856] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:23.856 [2024-07-26 11:32:19.513871] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:23.856 [2024-07-26 11:32:19.513983] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:23.856 [2024-07-26 11:32:19.513992] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:23.856 [2024-07-26 11:32:19.514007] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.856 [2024-07-26 11:32:19.514015] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.856 [2024-07-26 11:32:19.514023] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x177c540) 00:24:23.856 [2024-07-26 11:32:19.514034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.856 [2024-07-26 11:32:19.514058] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dc3c0, cid 0, qid 0 00:24:23.856 [2024-07-26 11:32:19.514282] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:24:23.856 [2024-07-26 11:32:19.514295] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.856 [2024-07-26 11:32:19.514302] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.856 [2024-07-26 11:32:19.514310] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17dc3c0) on tqpair=0x177c540 00:24:23.856 [2024-07-26 11:32:19.514319] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:23.856 [2024-07-26 11:32:19.514336] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.856 [2024-07-26 11:32:19.514346] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.856 [2024-07-26 11:32:19.514353] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x177c540) 00:24:23.856 [2024-07-26 11:32:19.514365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.856 [2024-07-26 11:32:19.514388] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dc3c0, cid 0, qid 0 00:24:23.856 [2024-07-26 11:32:19.514553] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.856 [2024-07-26 11:32:19.514571] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.856 [2024-07-26 11:32:19.514578] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.856 [2024-07-26 11:32:19.514586] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17dc3c0) on tqpair=0x177c540 00:24:23.856 [2024-07-26 11:32:19.514594] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:23.856 [2024-07-26 11:32:19.514604] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:23.856 [2024-07-26 11:32:19.514619] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:23.856 [2024-07-26 11:32:19.514640] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:23.856 [2024-07-26 11:32:19.514657] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.856 [2024-07-26 11:32:19.514666] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x177c540) 00:24:23.856 [2024-07-26 11:32:19.514678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.856 [2024-07-26 11:32:19.514703] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dc3c0, cid 0, qid 0 00:24:23.856 [2024-07-26 11:32:19.514913] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:23.856 [2024-07-26 11:32:19.514931] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:23.856 [2024-07-26 11:32:19.514938] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:23.856 [2024-07-26 11:32:19.514946] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x177c540): datao=0, datal=4096, cccid=0 00:24:23.856 [2024-07-26 11:32:19.514954] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17dc3c0) on tqpair(0x177c540): expected_datao=0, payload_size=4096 00:24:23.856 [2024-07-26 11:32:19.514963] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.856 [2024-07-26 11:32:19.514984] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:23.856 [2024-07-26 11:32:19.514996] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:24.118 [2024-07-26 11:32:19.555628] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.118 [2024-07-26 11:32:19.555648] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.118 [2024-07-26 11:32:19.555657] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.118 [2024-07-26 11:32:19.555665] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17dc3c0) on tqpair=0x177c540 00:24:24.118 [2024-07-26 11:32:19.555678] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:24.118 [2024-07-26 11:32:19.555688] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:24.118 [2024-07-26 11:32:19.555696] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:24.118 [2024-07-26 11:32:19.555706] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:24.118 [2024-07-26 11:32:19.555715] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:24:24.118 [2024-07-26 11:32:19.555724] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:24.118 [2024-07-26 11:32:19.555740] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:24.118 [2024-07-26 11:32:19.555760] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.118 [2024-07-26 11:32:19.555770] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.118 [2024-07-26 11:32:19.555782] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x177c540) 00:24:24.118 [2024-07-26 11:32:19.555795] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:24.118 [2024-07-26 11:32:19.555820] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dc3c0, cid 0, qid 0 00:24:24.118 [2024-07-26 11:32:19.555951] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.118 [2024-07-26 11:32:19.555969] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.118 [2024-07-26 11:32:19.555976] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.118 [2024-07-26 11:32:19.555984] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17dc3c0) on tqpair=0x177c540 00:24:24.118 [2024-07-26 11:32:19.555996] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.118 [2024-07-26 11:32:19.556005] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.118 [2024-07-26 11:32:19.556012] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x177c540) 00:24:24.118 [2024-07-26 11:32:19.556024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.118 [2024-07-26 11:32:19.556035] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.118 [2024-07-26 11:32:19.556043] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.118 [2024-07-26 11:32:19.556050] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x177c540) 00:24:24.118 [2024-07-26 11:32:19.556059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.118 [2024-07-26 11:32:19.556070] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.118 [2024-07-26 11:32:19.556078] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.118 [2024-07-26 11:32:19.556085] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x177c540) 00:24:24.118 [2024-07-26 11:32:19.556094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.118 [2024-07-26 11:32:19.556105] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.118 [2024-07-26 11:32:19.556112] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.118 [2024-07-26 11:32:19.556119] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177c540) 00:24:24.118 [2024-07-26 11:32:19.556129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.118 [2024-07-26 11:32:19.556139] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:24:24.118 [2024-07-26 11:32:19.556161] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:24.118 [2024-07-26 11:32:19.556177] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.118 [2024-07-26 11:32:19.556185] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x177c540) 00:24:24.118 [2024-07-26 11:32:19.556197] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.118 [2024-07-26 11:32:19.556223] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dc3c0, cid 0, qid 0 00:24:24.118 [2024-07-26 11:32:19.556235] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dc540, cid 1, qid 0 00:24:24.118 [2024-07-26 11:32:19.556243] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dc6c0, cid 2, qid 0 00:24:24.118 [2024-07-26 11:32:19.556252] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dc840, cid 3, qid 0 00:24:24.118 [2024-07-26 11:32:19.556260] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dc9c0, cid 4, qid 0 00:24:24.118 [2024-07-26 11:32:19.560443] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.118 [2024-07-26 11:32:19.560461] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.118 [2024-07-26 11:32:19.560469] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.118 [2024-07-26 11:32:19.560476] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17dc9c0) on tqpair=0x177c540 00:24:24.118 [2024-07-26 11:32:19.560486] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:24.118 [2024-07-26 11:32:19.560496] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:24.118 [2024-07-26 11:32:19.560517] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.118 [2024-07-26 11:32:19.560528] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x177c540) 00:24:24.118 [2024-07-26 11:32:19.560540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.118 [2024-07-26 11:32:19.560565] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dc9c0, cid 4, qid 0 00:24:24.118 [2024-07-26 11:32:19.560775] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:24.118 [2024-07-26 11:32:19.560792] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:24.118 [2024-07-26 11:32:19.560799] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:24.118 [2024-07-26 11:32:19.560807] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x177c540): datao=0, datal=4096, cccid=4 00:24:24.118 [2024-07-26 11:32:19.560815] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17dc9c0) on tqpair(0x177c540): expected_datao=0, payload_size=4096 00:24:24.118 [2024-07-26 11:32:19.560823] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.118 [2024-07-26 11:32:19.560871] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:24.118 [2024-07-26 11:32:19.560882] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:24.118 [2024-07-26 11:32:19.561001] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.118 [2024-07-26 11:32:19.561018] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.118 [2024-07-26 11:32:19.561026] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.118 [2024-07-26 11:32:19.561033] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17dc9c0) on tqpair=0x177c540 00:24:24.118 [2024-07-26 11:32:19.561054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:24.118 [2024-07-26 11:32:19.561095] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.118 [2024-07-26 11:32:19.561108] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x177c540) 00:24:24.118 [2024-07-26 11:32:19.561120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.118 [2024-07-26 11:32:19.561132] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.118 [2024-07-26 11:32:19.561140] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.118 [2024-07-26 11:32:19.561147] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x177c540) 00:24:24.118 [2024-07-26 
11:32:19.561157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.118 [2024-07-26 11:32:19.561194] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dc9c0, cid 4, qid 0 00:24:24.118 [2024-07-26 11:32:19.561207] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dcb40, cid 5, qid 0 00:24:24.118 [2024-07-26 11:32:19.561465] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:24.118 [2024-07-26 11:32:19.561483] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:24.118 [2024-07-26 11:32:19.561490] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:24.119 [2024-07-26 11:32:19.561502] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x177c540): datao=0, datal=1024, cccid=4 00:24:24.119 [2024-07-26 11:32:19.561511] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17dc9c0) on tqpair(0x177c540): expected_datao=0, payload_size=1024 00:24:24.119 [2024-07-26 11:32:19.561520] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.119 [2024-07-26 11:32:19.561530] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:24.119 [2024-07-26 11:32:19.561539] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:24.119 [2024-07-26 11:32:19.561548] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.119 [2024-07-26 11:32:19.561558] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.119 [2024-07-26 11:32:19.561565] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.119 [2024-07-26 11:32:19.561572] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17dcb40) on tqpair=0x177c540 00:24:24.119 [2024-07-26 11:32:19.602622] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.119 [2024-07-26 11:32:19.602643] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.119 [2024-07-26 11:32:19.602652] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.119 [2024-07-26 11:32:19.602660] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17dc9c0) on tqpair=0x177c540 00:24:24.119 [2024-07-26 11:32:19.602680] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.119 [2024-07-26 11:32:19.602690] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x177c540) 00:24:24.119 [2024-07-26 11:32:19.602703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.119 [2024-07-26 11:32:19.602737] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dc9c0, cid 4, qid 0 00:24:24.119 [2024-07-26 11:32:19.602891] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:24.119 [2024-07-26 11:32:19.602908] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:24.119 [2024-07-26 11:32:19.602915] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:24.119 [2024-07-26 11:32:19.602922] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x177c540): datao=0, datal=3072, cccid=4 00:24:24.119 [2024-07-26 11:32:19.602931] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17dc9c0) on tqpair(0x177c540): expected_datao=0, payload_size=3072 00:24:24.119 
[2024-07-26 11:32:19.602939] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.119 [2024-07-26 11:32:19.602950] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:24.119 [2024-07-26 11:32:19.602958] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:24.119 [2024-07-26 11:32:19.603007] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.119 [2024-07-26 11:32:19.603019] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.119 [2024-07-26 11:32:19.603027] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.119 [2024-07-26 11:32:19.603034] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17dc9c0) on tqpair=0x177c540 00:24:24.119 [2024-07-26 11:32:19.603050] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.119 [2024-07-26 11:32:19.603060] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x177c540) 00:24:24.119 [2024-07-26 11:32:19.603072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.119 [2024-07-26 11:32:19.603104] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dc9c0, cid 4, qid 0 00:24:24.119 [2024-07-26 11:32:19.603262] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:24.119 [2024-07-26 11:32:19.603278] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:24.119 [2024-07-26 11:32:19.603286] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:24.119 [2024-07-26 11:32:19.603293] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x177c540): datao=0, datal=8, cccid=4 00:24:24.119 [2024-07-26 11:32:19.603307] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17dc9c0) on tqpair(0x177c540): expected_datao=0, payload_size=8 00:24:24.119 [2024-07-26 11:32:19.603315] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.119 [2024-07-26 11:32:19.603326] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:24.119 [2024-07-26 11:32:19.603334] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:24.119 [2024-07-26 11:32:19.647449] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.119 [2024-07-26 11:32:19.647469] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.119 [2024-07-26 11:32:19.647477] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.119 [2024-07-26 11:32:19.647485] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17dc9c0) on tqpair=0x177c540 00:24:24.119 ===================================================== 00:24:24.119 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:24.119 ===================================================== 00:24:24.119 Controller Capabilities/Features 00:24:24.119 ================================ 00:24:24.119 Vendor ID: 0000 00:24:24.119 Subsystem Vendor ID: 0000 00:24:24.119 Serial Number: .................... 00:24:24.119 Model Number: ........................................ 
00:24:24.119 Firmware Version: 24.09 00:24:24.119 Recommended Arb Burst: 0 00:24:24.119 IEEE OUI Identifier: 00 00 00 00:24:24.119 Multi-path I/O 00:24:24.119 May have multiple subsystem ports: No 00:24:24.119 May have multiple controllers: No 00:24:24.119 Associated with SR-IOV VF: No 00:24:24.119 Max Data Transfer Size: 131072 00:24:24.119 Max Number of Namespaces: 0 00:24:24.119 Max Number of I/O Queues: 1024 00:24:24.119 NVMe Specification Version (VS): 1.3 00:24:24.119 NVMe Specification Version (Identify): 1.3 00:24:24.119 Maximum Queue Entries: 128 00:24:24.119 Contiguous Queues Required: Yes 00:24:24.119 Arbitration Mechanisms Supported 00:24:24.119 Weighted Round Robin: Not Supported 00:24:24.119 Vendor Specific: Not Supported 00:24:24.119 Reset Timeout: 15000 ms 00:24:24.119 Doorbell Stride: 4 bytes 00:24:24.119 NVM Subsystem Reset: Not Supported 00:24:24.119 Command Sets Supported 00:24:24.119 NVM Command Set: Supported 00:24:24.119 Boot Partition: Not Supported 00:24:24.119 Memory Page Size Minimum: 4096 bytes 00:24:24.119 Memory Page Size Maximum: 4096 bytes 00:24:24.119 Persistent Memory Region: Not Supported 00:24:24.119 Optional Asynchronous Events Supported 00:24:24.119 Namespace Attribute Notices: Not Supported 00:24:24.119 Firmware Activation Notices: Not Supported 00:24:24.119 ANA Change Notices: Not Supported 00:24:24.119 PLE Aggregate Log Change Notices: Not Supported 00:24:24.119 LBA Status Info Alert Notices: Not Supported 00:24:24.119 EGE Aggregate Log Change Notices: Not Supported 00:24:24.119 Normal NVM Subsystem Shutdown event: Not Supported 00:24:24.119 Zone Descriptor Change Notices: Not Supported 00:24:24.119 Discovery Log Change Notices: Supported 00:24:24.119 Controller Attributes 00:24:24.119 128-bit Host Identifier: Not Supported 00:24:24.119 Non-Operational Permissive Mode: Not Supported 00:24:24.119 NVM Sets: Not Supported 00:24:24.119 Read Recovery Levels: Not Supported 00:24:24.119 Endurance Groups: Not Supported 00:24:24.119 Predictable Latency Mode: Not Supported 00:24:24.119 Traffic Based Keep ALive: Not Supported 00:24:24.119 Namespace Granularity: Not Supported 00:24:24.119 SQ Associations: Not Supported 00:24:24.119 UUID List: Not Supported 00:24:24.119 Multi-Domain Subsystem: Not Supported 00:24:24.119 Fixed Capacity Management: Not Supported 00:24:24.119 Variable Capacity Management: Not Supported 00:24:24.119 Delete Endurance Group: Not Supported 00:24:24.119 Delete NVM Set: Not Supported 00:24:24.119 Extended LBA Formats Supported: Not Supported 00:24:24.119 Flexible Data Placement Supported: Not Supported 00:24:24.119 00:24:24.119 Controller Memory Buffer Support 00:24:24.119 ================================ 00:24:24.119 Supported: No 00:24:24.119 00:24:24.119 Persistent Memory Region Support 00:24:24.119 ================================ 00:24:24.119 Supported: No 00:24:24.119 00:24:24.119 Admin Command Set Attributes 00:24:24.119 ============================ 00:24:24.119 Security Send/Receive: Not Supported 00:24:24.119 Format NVM: Not Supported 00:24:24.119 Firmware Activate/Download: Not Supported 00:24:24.119 Namespace Management: Not Supported 00:24:24.119 Device Self-Test: Not Supported 00:24:24.119 Directives: Not Supported 00:24:24.119 NVMe-MI: Not Supported 00:24:24.119 Virtualization Management: Not Supported 00:24:24.119 Doorbell Buffer Config: Not Supported 00:24:24.119 Get LBA Status Capability: Not Supported 00:24:24.119 Command & Feature Lockdown Capability: Not Supported 00:24:24.119 Abort Command Limit: 1 00:24:24.119 Async 
Event Request Limit: 4 00:24:24.119 Number of Firmware Slots: N/A 00:24:24.119 Firmware Slot 1 Read-Only: N/A 00:24:24.119 Firmware Activation Without Reset: N/A 00:24:24.119 Multiple Update Detection Support: N/A 00:24:24.119 Firmware Update Granularity: No Information Provided 00:24:24.119 Per-Namespace SMART Log: No 00:24:24.119 Asymmetric Namespace Access Log Page: Not Supported 00:24:24.119 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:24.119 Command Effects Log Page: Not Supported 00:24:24.119 Get Log Page Extended Data: Supported 00:24:24.119 Telemetry Log Pages: Not Supported 00:24:24.119 Persistent Event Log Pages: Not Supported 00:24:24.119 Supported Log Pages Log Page: May Support 00:24:24.119 Commands Supported & Effects Log Page: Not Supported 00:24:24.119 Feature Identifiers & Effects Log Page:May Support 00:24:24.119 NVMe-MI Commands & Effects Log Page: May Support 00:24:24.119 Data Area 4 for Telemetry Log: Not Supported 00:24:24.119 Error Log Page Entries Supported: 128 00:24:24.120 Keep Alive: Not Supported 00:24:24.120 00:24:24.120 NVM Command Set Attributes 00:24:24.120 ========================== 00:24:24.120 Submission Queue Entry Size 00:24:24.120 Max: 1 00:24:24.120 Min: 1 00:24:24.120 Completion Queue Entry Size 00:24:24.120 Max: 1 00:24:24.120 Min: 1 00:24:24.120 Number of Namespaces: 0 00:24:24.120 Compare Command: Not Supported 00:24:24.120 Write Uncorrectable Command: Not Supported 00:24:24.120 Dataset Management Command: Not Supported 00:24:24.120 Write Zeroes Command: Not Supported 00:24:24.120 Set Features Save Field: Not Supported 00:24:24.120 Reservations: Not Supported 00:24:24.120 Timestamp: Not Supported 00:24:24.120 Copy: Not Supported 00:24:24.120 Volatile Write Cache: Not Present 00:24:24.120 Atomic Write Unit (Normal): 1 00:24:24.120 Atomic Write Unit (PFail): 1 00:24:24.120 Atomic Compare & Write Unit: 1 00:24:24.120 Fused Compare & Write: Supported 00:24:24.120 Scatter-Gather List 00:24:24.120 SGL Command Set: Supported 00:24:24.120 SGL Keyed: Supported 00:24:24.120 SGL Bit Bucket Descriptor: Not Supported 00:24:24.120 SGL Metadata Pointer: Not Supported 00:24:24.120 Oversized SGL: Not Supported 00:24:24.120 SGL Metadata Address: Not Supported 00:24:24.120 SGL Offset: Supported 00:24:24.120 Transport SGL Data Block: Not Supported 00:24:24.120 Replay Protected Memory Block: Not Supported 00:24:24.120 00:24:24.120 Firmware Slot Information 00:24:24.120 ========================= 00:24:24.120 Active slot: 0 00:24:24.120 00:24:24.120 00:24:24.120 Error Log 00:24:24.120 ========= 00:24:24.120 00:24:24.120 Active Namespaces 00:24:24.120 ================= 00:24:24.120 Discovery Log Page 00:24:24.120 ================== 00:24:24.120 Generation Counter: 2 00:24:24.120 Number of Records: 2 00:24:24.120 Record Format: 0 00:24:24.120 00:24:24.120 Discovery Log Entry 0 00:24:24.120 ---------------------- 00:24:24.120 Transport Type: 3 (TCP) 00:24:24.120 Address Family: 1 (IPv4) 00:24:24.120 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:24.120 Entry Flags: 00:24:24.120 Duplicate Returned Information: 1 00:24:24.120 Explicit Persistent Connection Support for Discovery: 1 00:24:24.120 Transport Requirements: 00:24:24.120 Secure Channel: Not Required 00:24:24.120 Port ID: 0 (0x0000) 00:24:24.120 Controller ID: 65535 (0xffff) 00:24:24.120 Admin Max SQ Size: 128 00:24:24.120 Transport Service Identifier: 4420 00:24:24.120 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:24.120 Transport Address: 10.0.0.2 00:24:24.120 
Discovery Log Entry 1 00:24:24.120 ---------------------- 00:24:24.120 Transport Type: 3 (TCP) 00:24:24.120 Address Family: 1 (IPv4) 00:24:24.120 Subsystem Type: 2 (NVM Subsystem) 00:24:24.120 Entry Flags: 00:24:24.120 Duplicate Returned Information: 0 00:24:24.120 Explicit Persistent Connection Support for Discovery: 0 00:24:24.120 Transport Requirements: 00:24:24.120 Secure Channel: Not Required 00:24:24.120 Port ID: 0 (0x0000) 00:24:24.120 Controller ID: 65535 (0xffff) 00:24:24.120 Admin Max SQ Size: 128 00:24:24.120 Transport Service Identifier: 4420 00:24:24.120 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:24.120 Transport Address: 10.0.0.2 [2024-07-26 11:32:19.647625] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:24:24.120 [2024-07-26 11:32:19.647649] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17dc3c0) on tqpair=0x177c540 00:24:24.120 [2024-07-26 11:32:19.647662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.120 [2024-07-26 11:32:19.647672] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17dc540) on tqpair=0x177c540 00:24:24.120 [2024-07-26 11:32:19.647681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.120 [2024-07-26 11:32:19.647690] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17dc6c0) on tqpair=0x177c540 00:24:24.120 [2024-07-26 11:32:19.647698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.120 [2024-07-26 11:32:19.647707] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17dc840) on tqpair=0x177c540 00:24:24.120 [2024-07-26 11:32:19.647716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.120 [2024-07-26 11:32:19.647736] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.120 [2024-07-26 11:32:19.647746] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.120 [2024-07-26 11:32:19.647753] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177c540) 00:24:24.120 [2024-07-26 11:32:19.647766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.120 [2024-07-26 11:32:19.647794] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dc840, cid 3, qid 0 00:24:24.120 [2024-07-26 11:32:19.647989] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.120 [2024-07-26 11:32:19.648006] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.120 [2024-07-26 11:32:19.648013] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.120 [2024-07-26 11:32:19.648020] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17dc840) on tqpair=0x177c540 00:24:24.120 [2024-07-26 11:32:19.648034] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.120 [2024-07-26 11:32:19.648042] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.120 [2024-07-26 11:32:19.648050] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177c540) 00:24:24.120 [2024-07-26 
11:32:19.648061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.120 [2024-07-26 11:32:19.648092] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dc840, cid 3, qid 0 00:24:24.120 [2024-07-26 11:32:19.648244] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.120 [2024-07-26 11:32:19.648260] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.120 [2024-07-26 11:32:19.648268] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.120 [2024-07-26 11:32:19.648279] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17dc840) on tqpair=0x177c540 00:24:24.120 [2024-07-26 11:32:19.648290] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:24.120 [2024-07-26 11:32:19.648300] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:24.120 [2024-07-26 11:32:19.648318] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.120 [2024-07-26 11:32:19.648329] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.120 [2024-07-26 11:32:19.648336] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177c540) 00:24:24.120 [2024-07-26 11:32:19.648348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.120 [2024-07-26 11:32:19.648372] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dc840, cid 3, qid 0 00:24:24.120 [2024-07-26 11:32:19.648580] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.120 [2024-07-26 11:32:19.648595] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.120 [2024-07-26 11:32:19.648602] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.120 [2024-07-26 11:32:19.648610] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17dc840) on tqpair=0x177c540 00:24:24.120 [2024-07-26 11:32:19.648628] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.120 [2024-07-26 11:32:19.648639] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.120 [2024-07-26 11:32:19.648646] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177c540) 00:24:24.120 [2024-07-26 11:32:19.648658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.120 [2024-07-26 11:32:19.648682] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dc840, cid 3, qid 0 00:24:24.120 [2024-07-26 11:32:19.648848] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.120 [2024-07-26 11:32:19.648865] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.120 [2024-07-26 11:32:19.648872] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.120 [2024-07-26 11:32:19.648880] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17dc840) on tqpair=0x177c540 00:24:24.120 [2024-07-26 11:32:19.648898] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.120 [2024-07-26 11:32:19.648908] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.120 [2024-07-26 11:32:19.648916] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177c540) 00:24:24.120 [2024-07-26 11:32:19.648928] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.120 [2024-07-26 11:32:19.648951] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dc840, cid 3, qid 0 00:24:24.120 [2024-07-26 11:32:19.649131] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.120 [2024-07-26 11:32:19.649148] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.120 [2024-07-26 11:32:19.649155] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.120 [2024-07-26 11:32:19.649163] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17dc840) on tqpair=0x177c540 00:24:24.120 [2024-07-26 11:32:19.649181] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.120 [2024-07-26 11:32:19.649192] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.120 [2024-07-26 11:32:19.649199] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177c540) 00:24:24.121 [2024-07-26 11:32:19.649210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.121 [2024-07-26 11:32:19.649234] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dc840, cid 3, qid 0 00:24:24.121 [2024-07-26 11:32:19.649398] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.121 [2024-07-26 11:32:19.649419] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.121 [2024-07-26 11:32:19.649438] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.121 [2024-07-26 11:32:19.649447] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17dc840) on tqpair=0x177c540 00:24:24.121 [2024-07-26 11:32:19.649467] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.121 [2024-07-26 11:32:19.649478] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.121 [2024-07-26 11:32:19.649485] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177c540) 00:24:24.121 [2024-07-26 11:32:19.649497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.121 [2024-07-26 11:32:19.649521] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dc840, cid 3, qid 0 00:24:24.121 [2024-07-26 11:32:19.649716] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.121 [2024-07-26 11:32:19.649730] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.121 [2024-07-26 11:32:19.649737] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.121 [2024-07-26 11:32:19.649744] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17dc840) on tqpair=0x177c540 00:24:24.121 [2024-07-26 11:32:19.649762] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.121 [2024-07-26 11:32:19.649772] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.121 [2024-07-26 11:32:19.649779] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177c540) 00:24:24.121 [2024-07-26 11:32:19.649791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.121 [2024-07-26 11:32:19.649814] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dc840, cid 3, qid 0 00:24:24.121 [2024-07-26 11:32:19.649974] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.121 [2024-07-26 11:32:19.649990] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.121 [2024-07-26 11:32:19.649998] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.121 [2024-07-26 11:32:19.650005] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17dc840) on tqpair=0x177c540 00:24:24.121 [2024-07-26 11:32:19.650024] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.121 [2024-07-26 11:32:19.650034] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.121 [2024-07-26 11:32:19.650042] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177c540) 00:24:24.121 [2024-07-26 11:32:19.650053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.121 [2024-07-26 11:32:19.650077] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dc840, cid 3, qid 0 00:24:24.121 [2024-07-26 11:32:19.650233] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.121 [2024-07-26 11:32:19.650246] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.121 [2024-07-26 11:32:19.650253] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.121 [2024-07-26 11:32:19.650261] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17dc840) on tqpair=0x177c540 00:24:24.121 [2024-07-26 11:32:19.650278] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.121 [2024-07-26 11:32:19.650289] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.121 [2024-07-26 11:32:19.650296] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177c540) 00:24:24.121 [2024-07-26 11:32:19.650308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.121 [2024-07-26 11:32:19.650330] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dc840, cid 3, qid 0 00:24:24.121 [2024-07-26 11:32:19.650545] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.121 [2024-07-26 11:32:19.650560] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.121 [2024-07-26 11:32:19.650572] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.121 [2024-07-26 11:32:19.650580] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17dc840) on tqpair=0x177c540 00:24:24.121 [2024-07-26 11:32:19.650599] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.121 [2024-07-26 11:32:19.650609] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.121 [2024-07-26 11:32:19.650616] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177c540) 00:24:24.121 [2024-07-26 11:32:19.650628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.121 [2024-07-26 11:32:19.650652] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dc840, cid 3, qid 0 00:24:24.121 
[2024-07-26 11:32:19.650813] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.121 [2024-07-26 11:32:19.650830] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.121 [2024-07-26 11:32:19.650837] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.121 [2024-07-26 11:32:19.650845] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17dc840) on tqpair=0x177c540 00:24:24.121 [2024-07-26 11:32:19.650863] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.121 [2024-07-26 11:32:19.650873] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.121 [2024-07-26 11:32:19.650881] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177c540) 00:24:24.121 [2024-07-26 11:32:19.650893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.121 [2024-07-26 11:32:19.650916] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dc840, cid 3, qid 0 00:24:24.121 [2024-07-26 11:32:19.651073] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.121 [2024-07-26 11:32:19.651086] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.121 [2024-07-26 11:32:19.651093] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.121 [2024-07-26 11:32:19.651100] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17dc840) on tqpair=0x177c540 00:24:24.121 [2024-07-26 11:32:19.651118] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.121 [2024-07-26 11:32:19.651128] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.121 [2024-07-26 11:32:19.651136] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177c540) 00:24:24.121 [2024-07-26 11:32:19.651147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.121 [2024-07-26 11:32:19.651170] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dc840, cid 3, qid 0 00:24:24.121 [2024-07-26 11:32:19.651334] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.121 [2024-07-26 11:32:19.651351] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.121 [2024-07-26 11:32:19.651358] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.121 [2024-07-26 11:32:19.651365] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17dc840) on tqpair=0x177c540 00:24:24.121 [2024-07-26 11:32:19.651383] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.121 [2024-07-26 11:32:19.651394] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.121 [2024-07-26 11:32:19.651401] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177c540) 00:24:24.121 [2024-07-26 11:32:19.651413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.121 [2024-07-26 11:32:19.655446] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dc840, cid 3, qid 0 00:24:24.121 [2024-07-26 11:32:19.655681] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.121 [2024-07-26 11:32:19.655695] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
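For readers following the trace: the two spdk_nvme_identify runs above exercise the standard SPDK host-side sequence — fabric connect, the CC.EN/CSTS.RDY enable handshake, IDENTIFY CONTROLLER, AER and keep-alive setup, a discovery GET LOG PAGE (LID 70h) read, then shutdown. The following is a minimal illustrative sketch of that sequence (not part of the captured log), written against the public SPDK NVMe API assumed from this tree (v24.09-pre); the app name "discovery_sketch" is arbitrary, and building/linking against SPDK is assumed.

    #include <stdio.h>
    #include <stdbool.h>
    #include <inttypes.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"
    #include "spdk/nvmf_spec.h"

    static bool g_log_done;

    /* Admin completion callback for the GET LOG PAGE (LID 70h) read. */
    static void
    get_log_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        (void)arg;
        (void)cpl;
        g_log_done = true;
    }

    int
    main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;
        struct spdk_nvmf_discovery_log_page *log;
        uint64_t i;

        env_opts.opts_size = sizeof(env_opts);
        spdk_env_opts_init(&env_opts);
        env_opts.name = "discovery_sketch";   /* arbitrary app name */
        if (spdk_env_init(&env_opts) < 0) {
            return 1;
        }

        /* Same transport ID string format passed with -r in the log above. */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2014-08.org.nvmexpress.discovery") != 0) {
            return 1;
        }

        /*
         * Drives the init state machine traced above: FABRIC CONNECT,
         * disable (CC.EN = 0, wait CSTS.RDY = 0), enable (CC.EN = 1, wait
         * CSTS.RDY = 1), IDENTIFY CONTROLLER, AER configuration, keep-alive.
         */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("CNTLID 0x%04x, subnqn %s\n", cdata->cntlid, cdata->subnqn);

        /*
         * GET LOG PAGE (02h), LID 70h, nsid 0 — as in the trace. One 4 KiB
         * read covers the 1 KiB header plus the two 1 KiB records reported
         * here; a production reader would re-check genctr between reads,
         * which is why the tool above re-reads the page.
         */
        log = spdk_dma_zmalloc(4096, 4096, NULL);
        if (log != NULL &&
            spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY, 0,
                                             log, 4096, 0, get_log_cb, NULL) == 0) {
            while (!g_log_done) {
                spdk_nvme_ctrlr_process_admin_completions(ctrlr);
            }
            printf("genctr %" PRIu64 ", numrec %" PRIu64 "\n",
                   log->genctr, log->numrec);
            for (i = 0; i < log->numrec; i++) {
                printf("entry %" PRIu64 ": trtype %u subtype %u subnqn %s\n",
                       i, log->entries[i].trtype, log->entries[i].subtype,
                       log->entries[i].subnqn);
            }
        }
        spdk_dma_free(log);

        /* Shutdown path from the tail of the trace: set CC.SHN, poll CSTS.SHST. */
        spdk_nvme_detach(ctrlr);
        return 0;
    }

Against the target traced above this would print the two discovery records (the discovery subsystem itself and nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420), mirroring the Discovery Log Page dump earlier in this log.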
00:24:24.121 [2024-07-26 11:32:19.655702] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.121 [2024-07-26 11:32:19.655715] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17dc840) on tqpair=0x177c540 00:24:24.121 [2024-07-26 11:32:19.655731] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:24:24.121 00:24:24.121 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:24.121 [2024-07-26 11:32:19.693118] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:24:24.121 [2024-07-26 11:32:19.693168] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2179881 ] 00:24:24.121 EAL: No free 2048 kB hugepages reported on node 1 00:24:24.121 [2024-07-26 11:32:19.728502] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:24:24.121 [2024-07-26 11:32:19.728563] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:24.121 [2024-07-26 11:32:19.728574] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:24.121 [2024-07-26 11:32:19.728589] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:24.121 [2024-07-26 11:32:19.728603] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:24.121 [2024-07-26 11:32:19.732486] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:24:24.121 [2024-07-26 11:32:19.732527] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x11cd540 0 00:24:24.121 [2024-07-26 11:32:19.739437] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:24.121 [2024-07-26 11:32:19.739464] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:24.121 [2024-07-26 11:32:19.739474] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:24.121 [2024-07-26 11:32:19.739481] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:24.121 [2024-07-26 11:32:19.739524] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.121 [2024-07-26 11:32:19.739537] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.121 [2024-07-26 11:32:19.739544] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11cd540) 00:24:24.121 [2024-07-26 11:32:19.739560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:24.121 [2024-07-26 11:32:19.739592] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x122d3c0, cid 0, qid 0 00:24:24.121 [2024-07-26 11:32:19.747443] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.121 [2024-07-26 11:32:19.747462] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.121 [2024-07-26 11:32:19.747470] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:24:24.122 [2024-07-26 11:32:19.747478] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x122d3c0) on tqpair=0x11cd540 00:24:24.122 [2024-07-26 11:32:19.747493] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:24.122 [2024-07-26 11:32:19.747505] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:24:24.122 [2024-07-26 11:32:19.747515] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:24:24.122 [2024-07-26 11:32:19.747538] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.122 [2024-07-26 11:32:19.747548] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.122 [2024-07-26 11:32:19.747559] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11cd540) 00:24:24.122 [2024-07-26 11:32:19.747572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.122 [2024-07-26 11:32:19.747598] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x122d3c0, cid 0, qid 0 00:24:24.122 [2024-07-26 11:32:19.747782] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.122 [2024-07-26 11:32:19.747799] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.122 [2024-07-26 11:32:19.747806] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.122 [2024-07-26 11:32:19.747814] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x122d3c0) on tqpair=0x11cd540 00:24:24.122 [2024-07-26 11:32:19.747827] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:24.122 [2024-07-26 11:32:19.747844] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:24.122 [2024-07-26 11:32:19.747857] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.122 [2024-07-26 11:32:19.747865] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.122 [2024-07-26 11:32:19.747872] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11cd540) 00:24:24.122 [2024-07-26 11:32:19.747884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.122 [2024-07-26 11:32:19.747908] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x122d3c0, cid 0, qid 0 00:24:24.122 [2024-07-26 11:32:19.748081] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.122 [2024-07-26 11:32:19.748098] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.122 [2024-07-26 11:32:19.748105] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.122 [2024-07-26 11:32:19.748112] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x122d3c0) on tqpair=0x11cd540 00:24:24.122 [2024-07-26 11:32:19.748121] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:24:24.122 [2024-07-26 11:32:19.748137] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:24:24.122 [2024-07-26 11:32:19.748151] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.122 [2024-07-26 11:32:19.748159] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.122 [2024-07-26 11:32:19.748166] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11cd540) 00:24:24.122 [2024-07-26 11:32:19.748178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.122 [2024-07-26 11:32:19.748201] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x122d3c0, cid 0, qid 0 00:24:24.122 [2024-07-26 11:32:19.748375] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.122 [2024-07-26 11:32:19.748388] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.122 [2024-07-26 11:32:19.748396] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.122 [2024-07-26 11:32:19.748403] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x122d3c0) on tqpair=0x11cd540 00:24:24.122 [2024-07-26 11:32:19.748412] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:24.122 [2024-07-26 11:32:19.748437] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.122 [2024-07-26 11:32:19.748449] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.122 [2024-07-26 11:32:19.748456] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11cd540) 00:24:24.122 [2024-07-26 11:32:19.748468] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.122 [2024-07-26 11:32:19.748492] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x122d3c0, cid 0, qid 0 00:24:24.122 [2024-07-26 11:32:19.748624] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.122 [2024-07-26 11:32:19.748641] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.122 [2024-07-26 11:32:19.748649] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.122 [2024-07-26 11:32:19.748656] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x122d3c0) on tqpair=0x11cd540 00:24:24.122 [2024-07-26 11:32:19.748664] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:24.122 [2024-07-26 11:32:19.748673] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:24.122 [2024-07-26 11:32:19.748689] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:24.122 [2024-07-26 11:32:19.748800] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:24.122 [2024-07-26 11:32:19.748807] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:24.122 [2024-07-26 11:32:19.748821] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.122 [2024-07-26 11:32:19.748830] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.122 [2024-07-26 11:32:19.748837] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=0 on tqpair(0x11cd540) 00:24:24.122 [2024-07-26 11:32:19.748848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.122 [2024-07-26 11:32:19.748872] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x122d3c0, cid 0, qid 0 00:24:24.122 [2024-07-26 11:32:19.749034] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.122 [2024-07-26 11:32:19.749051] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.122 [2024-07-26 11:32:19.749058] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.122 [2024-07-26 11:32:19.749065] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x122d3c0) on tqpair=0x11cd540 00:24:24.122 [2024-07-26 11:32:19.749074] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:24.122 [2024-07-26 11:32:19.749093] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.122 [2024-07-26 11:32:19.749103] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.122 [2024-07-26 11:32:19.749110] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11cd540) 00:24:24.122 [2024-07-26 11:32:19.749122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.122 [2024-07-26 11:32:19.749145] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x122d3c0, cid 0, qid 0 00:24:24.122 [2024-07-26 11:32:19.749307] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.122 [2024-07-26 11:32:19.749320] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.122 [2024-07-26 11:32:19.749327] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.122 [2024-07-26 11:32:19.749334] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x122d3c0) on tqpair=0x11cd540 00:24:24.122 [2024-07-26 11:32:19.749342] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:24.122 [2024-07-26 11:32:19.749352] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:24:24.122 [2024-07-26 11:32:19.749366] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:24.122 [2024-07-26 11:32:19.749382] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:24.122 [2024-07-26 11:32:19.749400] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.122 [2024-07-26 11:32:19.749410] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11cd540) 00:24:24.122 [2024-07-26 11:32:19.749422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.122 [2024-07-26 11:32:19.749454] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x122d3c0, cid 0, qid 0 00:24:24.123 [2024-07-26 11:32:19.749623] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:24.123 [2024-07-26 
11:32:19.749637] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:24.123 [2024-07-26 11:32:19.749644] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:24.123 [2024-07-26 11:32:19.749651] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11cd540): datao=0, datal=4096, cccid=0 00:24:24.123 [2024-07-26 11:32:19.749659] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x122d3c0) on tqpair(0x11cd540): expected_datao=0, payload_size=4096 00:24:24.123 [2024-07-26 11:32:19.749667] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.123 [2024-07-26 11:32:19.749687] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:24.123 [2024-07-26 11:32:19.749697] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:24.123 [2024-07-26 11:32:19.749827] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.123 [2024-07-26 11:32:19.749840] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.123 [2024-07-26 11:32:19.749847] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.123 [2024-07-26 11:32:19.749854] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x122d3c0) on tqpair=0x11cd540 00:24:24.123 [2024-07-26 11:32:19.749866] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:24.123 [2024-07-26 11:32:19.749875] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:24.123 [2024-07-26 11:32:19.749884] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:24.123 [2024-07-26 11:32:19.749891] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:24.123 [2024-07-26 11:32:19.749899] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:24:24.123 [2024-07-26 11:32:19.749908] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:24.123 [2024-07-26 11:32:19.749923] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:24.123 [2024-07-26 11:32:19.749941] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.123 [2024-07-26 11:32:19.749951] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.123 [2024-07-26 11:32:19.749958] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11cd540) 00:24:24.123 [2024-07-26 11:32:19.749970] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:24.123 [2024-07-26 11:32:19.749993] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x122d3c0, cid 0, qid 0 00:24:24.123 [2024-07-26 11:32:19.750168] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.123 [2024-07-26 11:32:19.750181] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.123 [2024-07-26 11:32:19.750188] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.123 [2024-07-26 11:32:19.750195] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x122d3c0) on tqpair=0x11cd540 00:24:24.123 
[2024-07-26 11:32:19.750206] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.123 [2024-07-26 11:32:19.750214] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.123 [2024-07-26 11:32:19.750221] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11cd540) 00:24:24.123 [2024-07-26 11:32:19.750236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.123 [2024-07-26 11:32:19.750248] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.123 [2024-07-26 11:32:19.750256] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.123 [2024-07-26 11:32:19.750263] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x11cd540) 00:24:24.123 [2024-07-26 11:32:19.750272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.123 [2024-07-26 11:32:19.750283] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.123 [2024-07-26 11:32:19.750290] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.123 [2024-07-26 11:32:19.750297] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x11cd540) 00:24:24.123 [2024-07-26 11:32:19.750306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.123 [2024-07-26 11:32:19.750317] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.123 [2024-07-26 11:32:19.750324] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.123 [2024-07-26 11:32:19.750331] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11cd540) 00:24:24.123 [2024-07-26 11:32:19.750341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.123 [2024-07-26 11:32:19.750350] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:24.123 [2024-07-26 11:32:19.750371] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:24.123 [2024-07-26 11:32:19.750385] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.123 [2024-07-26 11:32:19.750393] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11cd540) 00:24:24.123 [2024-07-26 11:32:19.750405] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.123 [2024-07-26 11:32:19.750440] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x122d3c0, cid 0, qid 0 00:24:24.123 [2024-07-26 11:32:19.750454] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x122d540, cid 1, qid 0 00:24:24.123 [2024-07-26 11:32:19.750463] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x122d6c0, cid 2, qid 0 00:24:24.123 [2024-07-26 11:32:19.750471] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x122d840, cid 3, qid 0 00:24:24.123 [2024-07-26 11:32:19.750479] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x122d9c0, cid 4, 
qid 0 00:24:24.123 [2024-07-26 11:32:19.750657] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.123 [2024-07-26 11:32:19.750674] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.123 [2024-07-26 11:32:19.750681] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.123 [2024-07-26 11:32:19.750688] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x122d9c0) on tqpair=0x11cd540 00:24:24.123 [2024-07-26 11:32:19.750697] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:24.123 [2024-07-26 11:32:19.750707] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:24.123 [2024-07-26 11:32:19.750728] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:24.123 [2024-07-26 11:32:19.750741] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:24.123 [2024-07-26 11:32:19.750756] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.123 [2024-07-26 11:32:19.750765] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.123 [2024-07-26 11:32:19.750773] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11cd540) 00:24:24.123 [2024-07-26 11:32:19.750784] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:24.123 [2024-07-26 11:32:19.750808] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x122d9c0, cid 4, qid 0 00:24:24.123 [2024-07-26 11:32:19.750974] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.123 [2024-07-26 11:32:19.750991] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.123 [2024-07-26 11:32:19.750998] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.123 [2024-07-26 11:32:19.751005] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x122d9c0) on tqpair=0x11cd540 00:24:24.123 [2024-07-26 11:32:19.751080] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:24.123 [2024-07-26 11:32:19.751103] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:24.123 [2024-07-26 11:32:19.751119] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.123 [2024-07-26 11:32:19.751127] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11cd540) 00:24:24.123 [2024-07-26 11:32:19.751139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.123 [2024-07-26 11:32:19.751163] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x122d9c0, cid 4, qid 0 00:24:24.123 [2024-07-26 11:32:19.751310] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:24.123 [2024-07-26 11:32:19.751327] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:24.123 [2024-07-26 11:32:19.751334] 
nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:24.123 [2024-07-26 11:32:19.751341] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11cd540): datao=0, datal=4096, cccid=4 00:24:24.123 [2024-07-26 11:32:19.751350] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x122d9c0) on tqpair(0x11cd540): expected_datao=0, payload_size=4096 00:24:24.123 [2024-07-26 11:32:19.751358] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.123 [2024-07-26 11:32:19.751377] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:24.123 [2024-07-26 11:32:19.751386] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:24.384 [2024-07-26 11:32:19.794442] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.384 [2024-07-26 11:32:19.794463] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.384 [2024-07-26 11:32:19.794471] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.384 [2024-07-26 11:32:19.794479] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x122d9c0) on tqpair=0x11cd540 00:24:24.384 [2024-07-26 11:32:19.794495] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:24.384 [2024-07-26 11:32:19.794520] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:24.384 [2024-07-26 11:32:19.794541] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:24.384 [2024-07-26 11:32:19.794558] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.384 [2024-07-26 11:32:19.794567] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11cd540) 00:24:24.384 [2024-07-26 11:32:19.794579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.384 [2024-07-26 11:32:19.794605] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x122d9c0, cid 4, qid 0 00:24:24.384 [2024-07-26 11:32:19.794774] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:24.384 [2024-07-26 11:32:19.794791] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:24.384 [2024-07-26 11:32:19.794798] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:24.384 [2024-07-26 11:32:19.794806] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11cd540): datao=0, datal=4096, cccid=4 00:24:24.384 [2024-07-26 11:32:19.794814] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x122d9c0) on tqpair(0x11cd540): expected_datao=0, payload_size=4096 00:24:24.384 [2024-07-26 11:32:19.794823] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.384 [2024-07-26 11:32:19.794843] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:24.384 [2024-07-26 11:32:19.794853] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:24.384 [2024-07-26 11:32:19.835567] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.384 [2024-07-26 11:32:19.835586] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.384 [2024-07-26 11:32:19.835595] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.384 
[2024-07-26 11:32:19.835602] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x122d9c0) on tqpair=0x11cd540 00:24:24.384 [2024-07-26 11:32:19.835627] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:24.384 [2024-07-26 11:32:19.835649] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:24.384 [2024-07-26 11:32:19.835666] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.384 [2024-07-26 11:32:19.835675] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11cd540) 00:24:24.384 [2024-07-26 11:32:19.835687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.384 [2024-07-26 11:32:19.835713] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x122d9c0, cid 4, qid 0 00:24:24.384 [2024-07-26 11:32:19.835863] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:24.384 [2024-07-26 11:32:19.835880] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:24.384 [2024-07-26 11:32:19.835887] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:24.384 [2024-07-26 11:32:19.835894] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11cd540): datao=0, datal=4096, cccid=4 00:24:24.385 [2024-07-26 11:32:19.835903] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x122d9c0) on tqpair(0x11cd540): expected_datao=0, payload_size=4096 00:24:24.385 [2024-07-26 11:32:19.835913] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.385 [2024-07-26 11:32:19.835932] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:24.385 [2024-07-26 11:32:19.835942] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:24.385 [2024-07-26 11:32:19.876578] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.385 [2024-07-26 11:32:19.876600] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.385 [2024-07-26 11:32:19.876608] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.385 [2024-07-26 11:32:19.876616] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x122d9c0) on tqpair=0x11cd540 00:24:24.385 [2024-07-26 11:32:19.876631] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:24.385 [2024-07-26 11:32:19.876650] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:24:24.385 [2024-07-26 11:32:19.876668] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:24.385 [2024-07-26 11:32:19.876683] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:24.385 [2024-07-26 11:32:19.876697] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:24.385 [2024-07-26 11:32:19.876708] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:24.385 [2024-07-26 11:32:19.876717] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:24:24.385 [2024-07-26 11:32:19.876726] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:24:24.385 [2024-07-26 11:32:19.876735] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:24:24.385 [2024-07-26 11:32:19.876757] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.385 [2024-07-26 11:32:19.876767] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11cd540) 00:24:24.385 [2024-07-26 11:32:19.876780] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.385 [2024-07-26 11:32:19.876792] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.385 [2024-07-26 11:32:19.876800] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.385 [2024-07-26 11:32:19.876807] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11cd540) 00:24:24.385 [2024-07-26 11:32:19.876818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.385 [2024-07-26 11:32:19.876848] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x122d9c0, cid 4, qid 0 00:24:24.385 [2024-07-26 11:32:19.876861] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x122db40, cid 5, qid 0 00:24:24.385 [2024-07-26 11:32:19.877037] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.385 [2024-07-26 11:32:19.877054] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.385 [2024-07-26 11:32:19.877062] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.385 [2024-07-26 11:32:19.877069] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x122d9c0) on tqpair=0x11cd540 00:24:24.385 [2024-07-26 11:32:19.877081] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.385 [2024-07-26 11:32:19.877091] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.385 [2024-07-26 11:32:19.877099] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.385 [2024-07-26 11:32:19.877106] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x122db40) on tqpair=0x11cd540 00:24:24.385 [2024-07-26 11:32:19.877125] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.385 [2024-07-26 11:32:19.877135] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11cd540) 00:24:24.385 [2024-07-26 11:32:19.877147] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.385 [2024-07-26 11:32:19.877170] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x122db40, cid 5, qid 0 00:24:24.385 [2024-07-26 11:32:19.877351] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.385 [2024-07-26 11:32:19.877368] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.385 [2024-07-26 11:32:19.877375] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.385 [2024-07-26 11:32:19.877383] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x122db40) on tqpair=0x11cd540 00:24:24.385 [2024-07-26 11:32:19.877401] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.385 [2024-07-26 11:32:19.877412] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11cd540) 00:24:24.385 [2024-07-26 11:32:19.877423] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.385 [2024-07-26 11:32:19.877459] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x122db40, cid 5, qid 0 00:24:24.385 [2024-07-26 11:32:19.877632] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.385 [2024-07-26 11:32:19.877645] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.385 [2024-07-26 11:32:19.877653] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.385 [2024-07-26 11:32:19.877660] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x122db40) on tqpair=0x11cd540 00:24:24.385 [2024-07-26 11:32:19.877678] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.385 [2024-07-26 11:32:19.877688] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11cd540) 00:24:24.385 [2024-07-26 11:32:19.877700] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.385 [2024-07-26 11:32:19.877723] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x122db40, cid 5, qid 0 00:24:24.385 [2024-07-26 11:32:19.877854] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.385 [2024-07-26 11:32:19.877871] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.385 [2024-07-26 11:32:19.877878] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.385 [2024-07-26 11:32:19.877886] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x122db40) on tqpair=0x11cd540 00:24:24.385 [2024-07-26 11:32:19.877914] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.385 [2024-07-26 11:32:19.877926] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11cd540) 00:24:24.385 [2024-07-26 11:32:19.877938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.385 [2024-07-26 11:32:19.877952] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.385 [2024-07-26 11:32:19.877961] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11cd540) 00:24:24.385 [2024-07-26 11:32:19.877972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.385 [2024-07-26 11:32:19.877985] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.385 [2024-07-26 11:32:19.877993] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x11cd540) 00:24:24.385 [2024-07-26 11:32:19.878004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) 
qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.385 [2024-07-26 11:32:19.878017] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.385 [2024-07-26 11:32:19.878025] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x11cd540) 00:24:24.385 [2024-07-26 11:32:19.878036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.385 [2024-07-26 11:32:19.878061] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x122db40, cid 5, qid 0 00:24:24.385 [2024-07-26 11:32:19.878073] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x122d9c0, cid 4, qid 0 00:24:24.385 [2024-07-26 11:32:19.878082] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x122dcc0, cid 6, qid 0 00:24:24.385 [2024-07-26 11:32:19.878091] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x122de40, cid 7, qid 0 00:24:24.385 [2024-07-26 11:32:19.878398] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:24.385 [2024-07-26 11:32:19.878415] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:24.385 [2024-07-26 11:32:19.878423] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:24.385 [2024-07-26 11:32:19.882440] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11cd540): datao=0, datal=8192, cccid=5 00:24:24.385 [2024-07-26 11:32:19.882453] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x122db40) on tqpair(0x11cd540): expected_datao=0, payload_size=8192 00:24:24.385 [2024-07-26 11:32:19.882467] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.385 [2024-07-26 11:32:19.882480] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:24.385 [2024-07-26 11:32:19.882489] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:24.385 [2024-07-26 11:32:19.882499] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:24.385 [2024-07-26 11:32:19.882509] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:24.385 [2024-07-26 11:32:19.882517] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:24.385 [2024-07-26 11:32:19.882524] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11cd540): datao=0, datal=512, cccid=4 00:24:24.385 [2024-07-26 11:32:19.882532] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x122d9c0) on tqpair(0x11cd540): expected_datao=0, payload_size=512 00:24:24.385 [2024-07-26 11:32:19.882540] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.385 [2024-07-26 11:32:19.882551] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:24.385 [2024-07-26 11:32:19.882559] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:24.385 [2024-07-26 11:32:19.882568] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:24.385 [2024-07-26 11:32:19.882578] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:24.385 [2024-07-26 11:32:19.882585] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:24.385 [2024-07-26 11:32:19.882593] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11cd540): datao=0, datal=512, cccid=6 00:24:24.385 [2024-07-26 11:32:19.882601] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x122dcc0) on tqpair(0x11cd540): expected_datao=0, payload_size=512 00:24:24.385 [2024-07-26 11:32:19.882609] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.385 [2024-07-26 11:32:19.882620] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:24.386 [2024-07-26 11:32:19.882627] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:24.386 [2024-07-26 11:32:19.882637] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:24.386 [2024-07-26 11:32:19.882647] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:24.386 [2024-07-26 11:32:19.882654] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:24.386 [2024-07-26 11:32:19.882661] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11cd540): datao=0, datal=4096, cccid=7 00:24:24.386 [2024-07-26 11:32:19.882670] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x122de40) on tqpair(0x11cd540): expected_datao=0, payload_size=4096 00:24:24.386 [2024-07-26 11:32:19.882678] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.386 [2024-07-26 11:32:19.882689] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:24.386 [2024-07-26 11:32:19.882697] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:24.386 [2024-07-26 11:32:19.882711] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.386 [2024-07-26 11:32:19.882722] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.386 [2024-07-26 11:32:19.882729] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.386 [2024-07-26 11:32:19.882736] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x122db40) on tqpair=0x11cd540 00:24:24.386 [2024-07-26 11:32:19.882757] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.386 [2024-07-26 11:32:19.882770] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.386 [2024-07-26 11:32:19.882777] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.386 [2024-07-26 11:32:19.882784] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x122d9c0) on tqpair=0x11cd540 00:24:24.386 [2024-07-26 11:32:19.882801] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.386 [2024-07-26 11:32:19.882813] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.386 [2024-07-26 11:32:19.882820] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.386 [2024-07-26 11:32:19.882830] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x122dcc0) on tqpair=0x11cd540 00:24:24.386 [2024-07-26 11:32:19.882843] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.386 [2024-07-26 11:32:19.882854] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.386 [2024-07-26 11:32:19.882861] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.386 [2024-07-26 11:32:19.882869] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x122de40) on tqpair=0x11cd540 00:24:24.386 ===================================================== 00:24:24.386 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:24.386 ===================================================== 00:24:24.386 Controller Capabilities/Features 00:24:24.386 
================================ 00:24:24.386 Vendor ID: 8086 00:24:24.386 Subsystem Vendor ID: 8086 00:24:24.386 Serial Number: SPDK00000000000001 00:24:24.386 Model Number: SPDK bdev Controller 00:24:24.386 Firmware Version: 24.09 00:24:24.386 Recommended Arb Burst: 6 00:24:24.386 IEEE OUI Identifier: e4 d2 5c 00:24:24.386 Multi-path I/O 00:24:24.386 May have multiple subsystem ports: Yes 00:24:24.386 May have multiple controllers: Yes 00:24:24.386 Associated with SR-IOV VF: No 00:24:24.386 Max Data Transfer Size: 131072 00:24:24.386 Max Number of Namespaces: 32 00:24:24.386 Max Number of I/O Queues: 127 00:24:24.386 NVMe Specification Version (VS): 1.3 00:24:24.386 NVMe Specification Version (Identify): 1.3 00:24:24.386 Maximum Queue Entries: 128 00:24:24.386 Contiguous Queues Required: Yes 00:24:24.386 Arbitration Mechanisms Supported 00:24:24.386 Weighted Round Robin: Not Supported 00:24:24.386 Vendor Specific: Not Supported 00:24:24.386 Reset Timeout: 15000 ms 00:24:24.386 Doorbell Stride: 4 bytes 00:24:24.386 NVM Subsystem Reset: Not Supported 00:24:24.386 Command Sets Supported 00:24:24.386 NVM Command Set: Supported 00:24:24.386 Boot Partition: Not Supported 00:24:24.386 Memory Page Size Minimum: 4096 bytes 00:24:24.386 Memory Page Size Maximum: 4096 bytes 00:24:24.386 Persistent Memory Region: Not Supported 00:24:24.386 Optional Asynchronous Events Supported 00:24:24.386 Namespace Attribute Notices: Supported 00:24:24.386 Firmware Activation Notices: Not Supported 00:24:24.386 ANA Change Notices: Not Supported 00:24:24.386 PLE Aggregate Log Change Notices: Not Supported 00:24:24.386 LBA Status Info Alert Notices: Not Supported 00:24:24.386 EGE Aggregate Log Change Notices: Not Supported 00:24:24.386 Normal NVM Subsystem Shutdown event: Not Supported 00:24:24.386 Zone Descriptor Change Notices: Not Supported 00:24:24.386 Discovery Log Change Notices: Not Supported 00:24:24.386 Controller Attributes 00:24:24.386 128-bit Host Identifier: Supported 00:24:24.386 Non-Operational Permissive Mode: Not Supported 00:24:24.386 NVM Sets: Not Supported 00:24:24.386 Read Recovery Levels: Not Supported 00:24:24.386 Endurance Groups: Not Supported 00:24:24.386 Predictable Latency Mode: Not Supported 00:24:24.386 Traffic Based Keep ALive: Not Supported 00:24:24.386 Namespace Granularity: Not Supported 00:24:24.386 SQ Associations: Not Supported 00:24:24.386 UUID List: Not Supported 00:24:24.386 Multi-Domain Subsystem: Not Supported 00:24:24.386 Fixed Capacity Management: Not Supported 00:24:24.386 Variable Capacity Management: Not Supported 00:24:24.386 Delete Endurance Group: Not Supported 00:24:24.386 Delete NVM Set: Not Supported 00:24:24.386 Extended LBA Formats Supported: Not Supported 00:24:24.386 Flexible Data Placement Supported: Not Supported 00:24:24.386 00:24:24.386 Controller Memory Buffer Support 00:24:24.386 ================================ 00:24:24.386 Supported: No 00:24:24.386 00:24:24.386 Persistent Memory Region Support 00:24:24.386 ================================ 00:24:24.386 Supported: No 00:24:24.386 00:24:24.386 Admin Command Set Attributes 00:24:24.386 ============================ 00:24:24.386 Security Send/Receive: Not Supported 00:24:24.386 Format NVM: Not Supported 00:24:24.386 Firmware Activate/Download: Not Supported 00:24:24.386 Namespace Management: Not Supported 00:24:24.386 Device Self-Test: Not Supported 00:24:24.386 Directives: Not Supported 00:24:24.386 NVMe-MI: Not Supported 00:24:24.386 Virtualization Management: Not Supported 00:24:24.386 Doorbell Buffer 
Config: Not Supported 00:24:24.386 Get LBA Status Capability: Not Supported 00:24:24.386 Command & Feature Lockdown Capability: Not Supported 00:24:24.386 Abort Command Limit: 4 00:24:24.386 Async Event Request Limit: 4 00:24:24.386 Number of Firmware Slots: N/A 00:24:24.386 Firmware Slot 1 Read-Only: N/A 00:24:24.386 Firmware Activation Without Reset: N/A 00:24:24.386 Multiple Update Detection Support: N/A 00:24:24.386 Firmware Update Granularity: No Information Provided 00:24:24.386 Per-Namespace SMART Log: No 00:24:24.386 Asymmetric Namespace Access Log Page: Not Supported 00:24:24.386 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:24.386 Command Effects Log Page: Supported 00:24:24.386 Get Log Page Extended Data: Supported 00:24:24.386 Telemetry Log Pages: Not Supported 00:24:24.386 Persistent Event Log Pages: Not Supported 00:24:24.386 Supported Log Pages Log Page: May Support 00:24:24.386 Commands Supported & Effects Log Page: Not Supported 00:24:24.386 Feature Identifiers & Effects Log Page:May Support 00:24:24.386 NVMe-MI Commands & Effects Log Page: May Support 00:24:24.386 Data Area 4 for Telemetry Log: Not Supported 00:24:24.386 Error Log Page Entries Supported: 128 00:24:24.386 Keep Alive: Supported 00:24:24.386 Keep Alive Granularity: 10000 ms 00:24:24.386 00:24:24.386 NVM Command Set Attributes 00:24:24.386 ========================== 00:24:24.386 Submission Queue Entry Size 00:24:24.386 Max: 64 00:24:24.386 Min: 64 00:24:24.386 Completion Queue Entry Size 00:24:24.386 Max: 16 00:24:24.386 Min: 16 00:24:24.386 Number of Namespaces: 32 00:24:24.386 Compare Command: Supported 00:24:24.386 Write Uncorrectable Command: Not Supported 00:24:24.386 Dataset Management Command: Supported 00:24:24.386 Write Zeroes Command: Supported 00:24:24.386 Set Features Save Field: Not Supported 00:24:24.386 Reservations: Supported 00:24:24.386 Timestamp: Not Supported 00:24:24.386 Copy: Supported 00:24:24.386 Volatile Write Cache: Present 00:24:24.386 Atomic Write Unit (Normal): 1 00:24:24.386 Atomic Write Unit (PFail): 1 00:24:24.386 Atomic Compare & Write Unit: 1 00:24:24.386 Fused Compare & Write: Supported 00:24:24.386 Scatter-Gather List 00:24:24.386 SGL Command Set: Supported 00:24:24.386 SGL Keyed: Supported 00:24:24.386 SGL Bit Bucket Descriptor: Not Supported 00:24:24.386 SGL Metadata Pointer: Not Supported 00:24:24.386 Oversized SGL: Not Supported 00:24:24.386 SGL Metadata Address: Not Supported 00:24:24.386 SGL Offset: Supported 00:24:24.386 Transport SGL Data Block: Not Supported 00:24:24.386 Replay Protected Memory Block: Not Supported 00:24:24.386 00:24:24.386 Firmware Slot Information 00:24:24.386 ========================= 00:24:24.386 Active slot: 1 00:24:24.386 Slot 1 Firmware Revision: 24.09 00:24:24.387 00:24:24.387 00:24:24.387 Commands Supported and Effects 00:24:24.387 ============================== 00:24:24.387 Admin Commands 00:24:24.387 -------------- 00:24:24.387 Get Log Page (02h): Supported 00:24:24.387 Identify (06h): Supported 00:24:24.387 Abort (08h): Supported 00:24:24.387 Set Features (09h): Supported 00:24:24.387 Get Features (0Ah): Supported 00:24:24.387 Asynchronous Event Request (0Ch): Supported 00:24:24.387 Keep Alive (18h): Supported 00:24:24.387 I/O Commands 00:24:24.387 ------------ 00:24:24.387 Flush (00h): Supported LBA-Change 00:24:24.387 Write (01h): Supported LBA-Change 00:24:24.387 Read (02h): Supported 00:24:24.387 Compare (05h): Supported 00:24:24.387 Write Zeroes (08h): Supported LBA-Change 00:24:24.387 Dataset Management (09h): Supported 
LBA-Change 00:24:24.387 Copy (19h): Supported LBA-Change 00:24:24.387 00:24:24.387 Error Log 00:24:24.387 ========= 00:24:24.387 00:24:24.387 Arbitration 00:24:24.387 =========== 00:24:24.387 Arbitration Burst: 1 00:24:24.387 00:24:24.387 Power Management 00:24:24.387 ================ 00:24:24.387 Number of Power States: 1 00:24:24.387 Current Power State: Power State #0 00:24:24.387 Power State #0: 00:24:24.387 Max Power: 0.00 W 00:24:24.387 Non-Operational State: Operational 00:24:24.387 Entry Latency: Not Reported 00:24:24.387 Exit Latency: Not Reported 00:24:24.387 Relative Read Throughput: 0 00:24:24.387 Relative Read Latency: 0 00:24:24.387 Relative Write Throughput: 0 00:24:24.387 Relative Write Latency: 0 00:24:24.387 Idle Power: Not Reported 00:24:24.387 Active Power: Not Reported 00:24:24.387 Non-Operational Permissive Mode: Not Supported 00:24:24.387 00:24:24.387 Health Information 00:24:24.387 ================== 00:24:24.387 Critical Warnings: 00:24:24.387 Available Spare Space: OK 00:24:24.387 Temperature: OK 00:24:24.387 Device Reliability: OK 00:24:24.387 Read Only: No 00:24:24.387 Volatile Memory Backup: OK 00:24:24.387 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:24.387 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:24.387 Available Spare: 0% 00:24:24.387 Available Spare Threshold: 0% 00:24:24.387 Life Percentage Used:[2024-07-26 11:32:19.882999] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.387 [2024-07-26 11:32:19.883012] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x11cd540) 00:24:24.387 [2024-07-26 11:32:19.883025] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.387 [2024-07-26 11:32:19.883051] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x122de40, cid 7, qid 0 00:24:24.387 [2024-07-26 11:32:19.883248] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.387 [2024-07-26 11:32:19.883263] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.387 [2024-07-26 11:32:19.883270] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.387 [2024-07-26 11:32:19.883278] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x122de40) on tqpair=0x11cd540 00:24:24.387 [2024-07-26 11:32:19.883326] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:24:24.387 [2024-07-26 11:32:19.883348] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x122d3c0) on tqpair=0x11cd540 00:24:24.387 [2024-07-26 11:32:19.883360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.387 [2024-07-26 11:32:19.883370] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x122d540) on tqpair=0x11cd540 00:24:24.387 [2024-07-26 11:32:19.883378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.387 [2024-07-26 11:32:19.883388] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x122d6c0) on tqpair=0x11cd540 00:24:24.387 [2024-07-26 11:32:19.883396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.387 [2024-07-26 11:32:19.883405] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x122d840) on tqpair=0x11cd540 00:24:24.387 [2024-07-26 11:32:19.883414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.387 [2024-07-26 11:32:19.883435] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.387 [2024-07-26 11:32:19.883446] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.387 [2024-07-26 11:32:19.883453] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11cd540) 00:24:24.387 [2024-07-26 11:32:19.883478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.387 [2024-07-26 11:32:19.883505] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x122d840, cid 3, qid 0 00:24:24.387 [2024-07-26 11:32:19.883668] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.387 [2024-07-26 11:32:19.883686] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.387 [2024-07-26 11:32:19.883694] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.387 [2024-07-26 11:32:19.883701] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x122d840) on tqpair=0x11cd540 00:24:24.387 [2024-07-26 11:32:19.883713] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.387 [2024-07-26 11:32:19.883722] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.387 [2024-07-26 11:32:19.883729] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11cd540) 00:24:24.387 [2024-07-26 11:32:19.883741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.387 [2024-07-26 11:32:19.883776] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x122d840, cid 3, qid 0 00:24:24.387 [2024-07-26 11:32:19.883964] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.387 [2024-07-26 11:32:19.883978] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.387 [2024-07-26 11:32:19.883986] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.387 [2024-07-26 11:32:19.883993] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x122d840) on tqpair=0x11cd540 00:24:24.387 [2024-07-26 11:32:19.884002] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:24.387 [2024-07-26 11:32:19.884011] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:24:24.387 [2024-07-26 11:32:19.884029] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.387 [2024-07-26 11:32:19.884039] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.387 [2024-07-26 11:32:19.884046] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11cd540) 00:24:24.387 [2024-07-26 11:32:19.884057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.387 [2024-07-26 11:32:19.884080] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x122d840, cid 3, qid 0 00:24:24.387 [2024-07-26 11:32:19.884240] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.387 [2024-07-26 11:32:19.884257] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.387 [2024-07-26 11:32:19.884265] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.387 [2024-07-26 11:32:19.884272] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x122d840) on tqpair=0x11cd540 00:24:24.387 [2024-07-26 11:32:19.884291] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.387 [2024-07-26 11:32:19.884302] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.387 [2024-07-26 11:32:19.884309] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11cd540) 00:24:24.387 [2024-07-26 11:32:19.884321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.387 [2024-07-26 11:32:19.884344] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x122d840, cid 3, qid 0 00:24:24.387 [2024-07-26 11:32:19.884534] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.387 [2024-07-26 11:32:19.884549] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.387 [2024-07-26 11:32:19.884557] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.387 [2024-07-26 11:32:19.884565] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x122d840) on tqpair=0x11cd540 00:24:24.387 [2024-07-26 11:32:19.884583] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.387 [2024-07-26 11:32:19.884594] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.387 [2024-07-26 11:32:19.884601] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11cd540) 00:24:24.387 [2024-07-26 11:32:19.884613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.387 [2024-07-26 11:32:19.884636] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x122d840, cid 3, qid 0 00:24:24.387 [2024-07-26 11:32:19.884795] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.387 [2024-07-26 11:32:19.884808] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.387 [2024-07-26 11:32:19.884816] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.387 [2024-07-26 11:32:19.884824] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x122d840) on tqpair=0x11cd540 00:24:24.387 [2024-07-26 11:32:19.884841] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.387 [2024-07-26 11:32:19.884852] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.387 [2024-07-26 11:32:19.884859] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11cd540) 00:24:24.387 [2024-07-26 11:32:19.884875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.387 [2024-07-26 11:32:19.884898] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x122d840, cid 3, qid 0 00:24:24.387 [2024-07-26 11:32:19.885031] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.387 [2024-07-26 11:32:19.885048] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.387 [2024-07-26 11:32:19.885055] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.387 [2024-07-26 
11:32:19.885063] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x122d840) on tqpair=0x11cd540 00:24:24.387 [2024-07-26 11:32:19.885081] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.388 [2024-07-26 11:32:19.885091] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.388 [2024-07-26 11:32:19.885099] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11cd540) 00:24:24.388 [2024-07-26 11:32:19.885110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.388 [2024-07-26 11:32:19.885134] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x122d840, cid 3, qid 0 00:24:24.388 [2024-07-26 11:32:19.885267] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.388 [2024-07-26 11:32:19.885284] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.388 [2024-07-26 11:32:19.885292] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.388 [2024-07-26 11:32:19.885299] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x122d840) on tqpair=0x11cd540 00:24:24.388 [2024-07-26 11:32:19.885317] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.388 [2024-07-26 11:32:19.885328] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.388 [2024-07-26 11:32:19.885335] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11cd540) 00:24:24.388 [2024-07-26 11:32:19.885347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.388 [2024-07-26 11:32:19.885370] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x122d840, cid 3, qid 0 00:24:24.388 [2024-07-26 11:32:19.885549] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.388 [2024-07-26 11:32:19.885567] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.388 [2024-07-26 11:32:19.885575] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.388 [2024-07-26 11:32:19.885582] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x122d840) on tqpair=0x11cd540 00:24:24.388 [2024-07-26 11:32:19.885601] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.388 [2024-07-26 11:32:19.885612] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.388 [2024-07-26 11:32:19.885619] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11cd540) 00:24:24.388 [2024-07-26 11:32:19.885631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.388 [2024-07-26 11:32:19.885654] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x122d840, cid 3, qid 0 00:24:24.388 [2024-07-26 11:32:19.885819] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.388 [2024-07-26 11:32:19.885836] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.388 [2024-07-26 11:32:19.885844] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.388 [2024-07-26 11:32:19.885851] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x122d840) on tqpair=0x11cd540 00:24:24.388 [2024-07-26 11:32:19.885869] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:24:24.388 [2024-07-26 11:32:19.885879] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.388 [2024-07-26 11:32:19.885887] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11cd540) 00:24:24.388 [2024-07-26 11:32:19.885902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.388 [2024-07-26 11:32:19.885926] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x122d840, cid 3, qid 0 00:24:24.388 [2024-07-26 11:32:19.886093] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.388 [2024-07-26 11:32:19.886110] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.388 [2024-07-26 11:32:19.886117] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.388 [2024-07-26 11:32:19.886125] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x122d840) on tqpair=0x11cd540 00:24:24.388 [2024-07-26 11:32:19.886143] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.388 [2024-07-26 11:32:19.886153] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.388 [2024-07-26 11:32:19.886161] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11cd540) 00:24:24.388 [2024-07-26 11:32:19.886172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.388 [2024-07-26 11:32:19.886195] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x122d840, cid 3, qid 0 00:24:24.388 [2024-07-26 11:32:19.886341] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.388 [2024-07-26 11:32:19.886355] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.388 [2024-07-26 11:32:19.886362] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.388 [2024-07-26 11:32:19.886370] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x122d840) on tqpair=0x11cd540 00:24:24.388 [2024-07-26 11:32:19.886387] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:24.388 [2024-07-26 11:32:19.886397] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:24.388 [2024-07-26 11:32:19.886405] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11cd540) 00:24:24.388 [2024-07-26 11:32:19.886417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.388 [2024-07-26 11:32:19.890451] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x122d840, cid 3, qid 0 00:24:24.388 [2024-07-26 11:32:19.890613] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:24.388 [2024-07-26 11:32:19.890630] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:24.388 [2024-07-26 11:32:19.890638] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:24.388 [2024-07-26 11:32:19.890646] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x122d840) on tqpair=0x11cd540 00:24:24.388 [2024-07-26 11:32:19.890661] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:24:24.388 0% 00:24:24.388 Data Units Read: 0 00:24:24.388 Data Units Written: 0 00:24:24.388 Host Read Commands: 0 00:24:24.388 
Host Write Commands: 0 00:24:24.388 Controller Busy Time: 0 minutes 00:24:24.388 Power Cycles: 0 00:24:24.388 Power On Hours: 0 hours 00:24:24.388 Unsafe Shutdowns: 0 00:24:24.388 Unrecoverable Media Errors: 0 00:24:24.388 Lifetime Error Log Entries: 0 00:24:24.388 Warning Temperature Time: 0 minutes 00:24:24.388 Critical Temperature Time: 0 minutes 00:24:24.388 00:24:24.388 Number of Queues 00:24:24.388 ================ 00:24:24.388 Number of I/O Submission Queues: 127 00:24:24.388 Number of I/O Completion Queues: 127 00:24:24.388 00:24:24.388 Active Namespaces 00:24:24.388 ================= 00:24:24.388 Namespace ID:1 00:24:24.388 Error Recovery Timeout: Unlimited 00:24:24.388 Command Set Identifier: NVM (00h) 00:24:24.388 Deallocate: Supported 00:24:24.388 Deallocated/Unwritten Error: Not Supported 00:24:24.388 Deallocated Read Value: Unknown 00:24:24.388 Deallocate in Write Zeroes: Not Supported 00:24:24.388 Deallocated Guard Field: 0xFFFF 00:24:24.388 Flush: Supported 00:24:24.388 Reservation: Supported 00:24:24.388 Namespace Sharing Capabilities: Multiple Controllers 00:24:24.388 Size (in LBAs): 131072 (0GiB) 00:24:24.388 Capacity (in LBAs): 131072 (0GiB) 00:24:24.388 Utilization (in LBAs): 131072 (0GiB) 00:24:24.388 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:24.388 EUI64: ABCDEF0123456789 00:24:24.388 UUID: f66c5b40-eeb1-4326-a4db-c624a641299b 00:24:24.388 Thin Provisioning: Not Supported 00:24:24.388 Per-NS Atomic Units: Yes 00:24:24.388 Atomic Boundary Size (Normal): 0 00:24:24.388 Atomic Boundary Size (PFail): 0 00:24:24.388 Atomic Boundary Offset: 0 00:24:24.388 Maximum Single Source Range Length: 65535 00:24:24.388 Maximum Copy Length: 65535 00:24:24.388 Maximum Source Range Count: 1 00:24:24.388 NGUID/EUI64 Never Reused: No 00:24:24.388 Namespace Write Protected: No 00:24:24.388 Number of LBA Formats: 1 00:24:24.388 Current LBA Format: LBA Format #00 00:24:24.388 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:24.388 00:24:24.388 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:24.388 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:24.388 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.388 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:24.388 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.388 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:24.388 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:24.388 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:24.388 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:24:24.388 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:24.388 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:24:24.388 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:24.388 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:24.388 rmmod nvme_tcp 00:24:24.388 rmmod nvme_fabrics 00:24:24.388 rmmod nvme_keyring 00:24:24.388 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:24.388 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify 
-- nvmf/common.sh@124 -- # set -e 00:24:24.388 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:24:24.388 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2179732 ']' 00:24:24.388 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2179732 00:24:24.388 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 2179732 ']' 00:24:24.388 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 2179732 00:24:24.388 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:24:24.388 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:24.388 11:32:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2179732 00:24:24.389 11:32:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:24.389 11:32:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:24.389 11:32:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2179732' 00:24:24.389 killing process with pid 2179732 00:24:24.389 11:32:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 2179732 00:24:24.389 11:32:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 2179732 00:24:24.956 11:32:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:24.956 11:32:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:24.956 11:32:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:24.956 11:32:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:24.956 11:32:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:24.956 11:32:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:24.956 11:32:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:24.956 11:32:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.860 11:32:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:26.860 00:24:26.860 real 0m6.320s 00:24:26.860 user 0m5.587s 00:24:26.860 sys 0m2.473s 00:24:26.860 11:32:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:26.860 11:32:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:26.860 ************************************ 00:24:26.860 END TEST nvmf_identify 00:24:26.860 ************************************ 00:24:26.860 11:32:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:26.860 11:32:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:26.860 11:32:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:26.860 11:32:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.860 ************************************ 00:24:26.860 START TEST nvmf_perf 00:24:26.860 ************************************ 00:24:26.860 11:32:22 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:27.119 * Looking for test storage... 00:24:27.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
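Before any NVMe/TCP traffic can flow, nvmftestinit has to turn the two physical E810 ports into a self-contained initiator/target pair on a single host. The trace that follows does exactly that: it enumerates the supported NICs, moves one port into a private network namespace to act as the target, addresses both sides, and pings in each direction before starting the target application. Condensed into plain commands, this is roughly what the harness runs (a sketch assembled from the trace below; cvl_0_0 and cvl_0_1 are the interface names this particular job discovered):

    # Target NIC lives in its own netns so one host can act as both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # Initiator side stays in the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port toward the initiator interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # reachability check before the real test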
00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:27.119 11:32:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:29.652 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:29.652 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:29.652 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:29.652 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:29.652 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:29.653 
11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:29.653 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:29.653 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 
00:24:29.653 Found net devices under 0000:84:00.0: cvl_0_0 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:29.653 Found net devices under 0000:84:00.1: cvl_0_1 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:29.653 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:29.912 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:29.912 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:24:29.912 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:29.912 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms
00:24:29.912
00:24:29.912 --- 10.0.0.2 ping statistics ---
00:24:29.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:29.912 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms
00:24:29.912 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:29.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:29.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms
00:24:29.912
00:24:29.912 --- 10.0.0.1 ping statistics ---
00:24:29.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:29.912 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms
00:24:29.912 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:29.912 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0
00:24:29.912 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:24:29.912 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:29.912 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:24:29.912 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:24:29.912 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:29.912 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:24:29.912 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:24:29.912 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:24:29.912 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:24:29.912 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable
00:24:29.912 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:24:29.912 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2181944
00:24:29.912 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2181944
00:24:29.912 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 2181944 ']'
00:24:29.912 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:29.912 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:24:29.912 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100
00:24:29.912 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
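nvmfappstart has now launched nvmf_tgt inside the target namespace with -m 0xF, a core mask of 0b1111 that yields the four reactors reported on cores 0 through 3 just below, and waitforlisten blocks until the application's RPC socket answers rather than sleeping for a fixed interval. A minimal sketch of that readiness check, assuming the default /var/tmp/spdk.sock RPC socket (the harness's own waitforlisten also bounds the wait, note max_retries=100 in the trace above):

    # rpc_get_methods only succeeds once the target is accepting RPCs
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done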
00:24:29.912 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:29.912 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:29.912 [2024-07-26 11:32:25.453099] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:24:29.912 [2024-07-26 11:32:25.453284] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:29.912 EAL: No free 2048 kB hugepages reported on node 1 00:24:29.912 [2024-07-26 11:32:25.563846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:30.170 [2024-07-26 11:32:25.700481] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:30.170 [2024-07-26 11:32:25.700541] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:30.170 [2024-07-26 11:32:25.700558] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:30.170 [2024-07-26 11:32:25.700571] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:30.170 [2024-07-26 11:32:25.700582] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:30.170 [2024-07-26 11:32:25.700662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:30.170 [2024-07-26 11:32:25.702466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:30.170 [2024-07-26 11:32:25.702518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:30.170 [2024-07-26 11:32:25.702523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.428 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:30.428 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:24:30.428 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:30.428 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:30.428 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:30.428 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:30.428 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:30.428 11:32:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:33.706 11:32:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:33.706 11:32:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:33.963 11:32:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:82:00.0 00:24:33.963 11:32:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:34.222 11:32:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:34.222 11:32:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 
-- # '[' -n 0000:82:00.0 ']' 00:24:34.222 11:32:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:34.222 11:32:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:34.222 11:32:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:34.786 [2024-07-26 11:32:30.200090] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:34.786 11:32:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:35.048 11:32:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:35.048 11:32:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:35.628 11:32:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:35.628 11:32:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:36.195 11:32:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:36.453 [2024-07-26 11:32:32.083306] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:36.453 11:32:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:37.387 11:32:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:82:00.0 ']' 00:24:37.387 11:32:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:24:37.387 11:32:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:37.387 11:32:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:24:38.320 Initializing NVMe Controllers 00:24:38.320 Attached to NVMe Controller at 0000:82:00.0 [8086:0a54] 00:24:38.320 Associating PCIE (0000:82:00.0) NSID 1 with lcore 0 00:24:38.320 Initialization complete. Launching workers. 
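This first spdk_nvme_perf invocation drives the local PCIe controller (0000:82:00.0) directly rather than going over the fabric, so the table that follows is the baseline the NVMe/TCP numbers can be judged against. The MiB/s column is derived from the IOPS column and the 4 KiB IO size (-o 4096); a quick check of that relationship against the reported values:

    # IOPS * io_size_bytes / 2^20 should reproduce the MiB/s column
    awk 'BEGIN { printf "%.2f\n", 73918.29 * 4096 / 1048576 }'    # prints 288.74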
00:24:38.320 ========================================================
00:24:38.320 Latency(us)
00:24:38.320 Device Information : IOPS MiB/s Average min max
00:24:38.320 PCIE (0000:82:00.0) NSID 1 from core 0: 73918.29 288.74 432.32 38.79 4392.96
00:24:38.320 ========================================================
00:24:38.320 Total : 73918.29 288.74 432.32 38.79 4392.96
00:24:38.320
00:24:38.578 11:32:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:38.578 EAL: No free 2048 kB hugepages reported on node 1
00:24:39.952 Initializing NVMe Controllers
00:24:39.952 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:39.952 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:39.952 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:39.952 Initialization complete. Launching workers.
00:24:39.952 ========================================================
00:24:39.952 Latency(us)
00:24:39.952 Device Information : IOPS MiB/s Average min max
00:24:39.952 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 76.00 0.30 13665.72 215.39 44792.66
00:24:39.952 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 59.00 0.23 17041.93 7935.35 55888.46
00:24:39.952 ========================================================
00:24:39.952 Total : 135.00 0.53 15141.25 215.39 55888.46
00:24:39.952
00:24:39.952 11:32:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:39.952 EAL: No free 2048 kB hugepages reported on node 1
00:24:40.887 Initializing NVMe Controllers
00:24:40.887 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:40.887 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:40.887 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:40.887 Initialization complete. Launching workers.
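For the queue-depth-32 run just launched, the table below can be sanity-checked with Little's law: at steady state, IOPS multiplied by average latency should recover the number of IOs kept in flight per namespace. Using the values reported below (average latency is in microseconds):

    awk 'BEGIN { print 7680.99 * 4174.99 / 1e6 }'    # NSID 1: ~32 IOs in flight
    awk 'BEGIN { print 3871.00 * 8315.00 / 1e6 }'    # NSID 2: ~32 IOs in flight

Both come out at the configured -q 32, which says the run was queue-limited: the slower namespace simply holds the same 32 IOs for longer.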
00:24:40.887 ========================================================
00:24:40.887 Latency(us)
00:24:40.887 Device Information : IOPS MiB/s Average min max
00:24:40.887 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7680.99 30.00 4174.99 472.08 7998.12
00:24:40.887 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3871.00 15.12 8315.00 6030.37 15865.50
00:24:40.887 ========================================================
00:24:40.887 Total : 11551.99 45.12 5562.28 472.08 15865.50
00:24:41.145 11:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:24:41.145 11:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:24:41.145 11:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:41.145 EAL: No free 2048 kB hugepages reported on node 1
00:24:43.676 Initializing NVMe Controllers
00:24:43.676 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:43.676 Controller IO queue size 128, less than required.
00:24:43.676 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:43.676 Controller IO queue size 128, less than required.
00:24:43.676 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:43.676 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:43.676 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:43.676 Initialization complete. Launching workers.
00:24:43.676 ========================================================
00:24:43.676 Latency(us)
00:24:43.676 Device Information : IOPS MiB/s Average min max
00:24:43.676 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1007.78 251.94 130812.06 88664.76 231218.65
00:24:43.676 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 605.37 151.34 216807.29 61862.52 357699.11
00:24:43.676 ========================================================
00:24:43.676 Total : 1613.14 403.29 163083.59 61862.52 357699.11
00:24:43.676
00:24:43.676 11:32:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:24:43.676 EAL: No free 2048 kB hugepages reported on node 1
00:24:43.676 No valid NVMe controllers or AIO or URING devices found
00:24:43.676 Initializing NVMe Controllers
00:24:43.676 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:43.676 Controller IO queue size 128, less than required.
00:24:43.676 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:43.676 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:24:43.676 Controller IO queue size 128, less than required.
00:24:43.676 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:43.676 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512.
Removing this ns from test
00:24:43.677 WARNING: Some requested NVMe devices were skipped
00:24:43.677 11:32:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:24:43.677 EAL: No free 2048 kB hugepages reported on node 1
00:24:46.207 Initializing NVMe Controllers
00:24:46.207 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:46.207 Controller IO queue size 128, less than required.
00:24:46.207 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:46.207 Controller IO queue size 128, less than required.
00:24:46.207 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:46.207 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:46.207 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:46.207 Initialization complete. Launching workers.
00:24:46.207
00:24:46.208 ====================
00:24:46.208 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:24:46.208 TCP transport:
00:24:46.208 polls: 16139
00:24:46.208 idle_polls: 6025
00:24:46.208 sock_completions: 10114
00:24:46.208 nvme_completions: 4285
00:24:46.208 submitted_requests: 6466
00:24:46.208 queued_requests: 1
00:24:46.208
00:24:46.208 ====================
00:24:46.208 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:24:46.208 TCP transport:
00:24:46.208 polls: 15605
00:24:46.208 idle_polls: 6509
00:24:46.208 sock_completions: 9096
00:24:46.208 nvme_completions: 4221
00:24:46.208 submitted_requests: 6376
00:24:46.208 queued_requests: 1
00:24:46.208 ========================================================
00:24:46.208 Latency(us)
00:24:46.208 Device Information : IOPS MiB/s Average min max
00:24:46.208 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1070.90 267.73 124815.06 68144.21 185038.98
00:24:46.208 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1054.90 263.73 122316.49 56382.74 184321.44
00:24:46.208 ========================================================
00:24:46.208 Total : 2125.81 531.45 123575.18 56382.74 185038.98
00:24:46.208
00:24:46.208 11:32:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:24:46.466 11:32:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:46.466 11:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:24:46.466 11:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:24:46.466 11:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:24:46.466 11:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup
00:24:46.466 11:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync
00:24:46.466 11:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:24:46.466 11:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e
00:24:46.466 11:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:46.466 11:32:42 nvmf_tcp.nvmf_host.nvmf_perf --
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:46.466 rmmod nvme_tcp 00:24:46.466 rmmod nvme_fabrics 00:24:46.466 rmmod nvme_keyring 00:24:46.725 11:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:46.725 11:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:24:46.725 11:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:24:46.725 11:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2181944 ']' 00:24:46.725 11:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2181944 00:24:46.725 11:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 2181944 ']' 00:24:46.725 11:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 2181944 00:24:46.725 11:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:24:46.725 11:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:46.725 11:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2181944 00:24:46.725 11:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:46.725 11:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:46.725 11:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2181944' 00:24:46.725 killing process with pid 2181944 00:24:46.725 11:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 2181944 00:24:46.725 11:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 2181944 00:24:48.626 11:32:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:48.626 11:32:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:48.626 11:32:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:48.626 11:32:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:48.626 11:32:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:48.626 11:32:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:48.626 11:32:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:48.626 11:32:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.532 11:32:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:50.532 00:24:50.532 real 0m23.437s 00:24:50.532 user 1m13.425s 00:24:50.532 sys 0m5.977s 00:24:50.532 11:32:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:50.532 11:32:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:50.532 ************************************ 00:24:50.532 END TEST nvmf_perf 00:24:50.532 ************************************ 00:24:50.532 11:32:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:50.532 11:32:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:50.532 11:32:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:50.532 11:32:45 nvmf_tcp.nvmf_host 
-- common/autotest_common.sh@10 -- # set +x 00:24:50.532 ************************************ 00:24:50.532 START TEST nvmf_fio_host 00:24:50.532 ************************************ 00:24:50.532 11:32:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:50.532 * Looking for test storage... 00:24:50.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:50.532 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:50.533 11:32:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:53.069 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:53.069 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:53.069 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:53.070 11:32:48 
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]]
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0'
00:24:53.070 Found net devices under 0000:84:00.0: cvl_0_0
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]]
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1'
00:24:53.070 Found net devices under 0000:84:00.1: cvl_0_1
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:24:53.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:53.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms
00:24:53.070
00:24:53.070 --- 10.0.0.2 ping statistics ---
00:24:53.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:53.070 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:53.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:53.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms
00:24:53.070
00:24:53.070 --- 10.0.0.1 ping statistics ---
00:24:53.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:53.070 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:24:53.070 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:24:53.327 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]]
00:24:53.327 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt
00:24:53.327 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable
00:24:53.327 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:24:53.327 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2186054
00:24:53.327 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:24:53.327 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:24:53.327 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2186054
00:24:53.327 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 2186054 ']'
00:24:53.327 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:53.327 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100
00:24:53.327 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:53.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:53.327 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable
00:24:53.327 11:32:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:24:53.327 [2024-07-26 11:32:48.812993] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization...
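Everything from nvmf_tcp_init through the two pings builds the test topology: the first E810 port (cvl_0_0) is moved into a fresh network namespace and becomes the target side at 10.0.0.2/24, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1/24, with an iptables rule admitting NVMe/TCP traffic on port 4420. Condensed from the trace above (a sketch of what nvmf_tcp_init runs, not a verbatim excerpt):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

The cross-namespace pings validate the link before any NVMe traffic flows, and NVMF_APP is re-prefixed with NVMF_TARGET_NS_CMD so that nvmf_tgt itself runs inside the namespace.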
00:24:53.327 [2024-07-26 11:32:48.813099] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:53.327 EAL: No free 2048 kB hugepages reported on node 1
00:24:53.327 [2024-07-26 11:32:48.897799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:24:53.583 [2024-07-26 11:32:49.021204] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:53.583 [2024-07-26 11:32:49.021264] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:53.583 [2024-07-26 11:32:49.021281] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:53.583 [2024-07-26 11:32:49.021294] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:53.583 [2024-07-26 11:32:49.021306] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:53.583 [2024-07-26 11:32:49.021385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:53.583 [2024-07-26 11:32:49.021451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:24:53.583 [2024-07-26 11:32:49.021505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:24:53.583 [2024-07-26 11:32:49.021508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:24:53.583 11:32:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:24:53.583 11:32:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0
00:24:53.583 11:32:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:24:53.841 [2024-07-26 11:32:49.473121] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:53.841 11:32:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt
00:24:53.841 11:32:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable
00:24:53.841 11:32:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:24:54.098 11:32:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:24:54.356 Malloc1
00:24:54.356 11:32:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:54.922 11:32:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:24:55.186 11:32:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:55.443 [2024-07-26 11:32:51.088079] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:55.700 11:32:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
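Once the target's reactors are up, host/fio.sh drives the whole bring-up over rpc.py in a fixed order: transport first, then a backing bdev, then subsystem, namespace, and listeners. Stripped of the workspace paths, the sequence in the trace above is:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1          # 64 MB RAM bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The 64 MB malloc bdev lines up with the roughly 64 MiB of I/O the first fio job reports below.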
00:24:55.958 11:32:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme
00:24:55.958 11:32:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:24:55.958 11:32:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:24:55.958 11:32:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:24:55.958 11:32:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:24:55.958 11:32:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers
00:24:55.958 11:32:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:24:55.958 11:32:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift
00:24:55.958 11:32:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib=
00:24:55.958 11:32:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:24:55.958 11:32:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:24:55.958 11:32:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan
00:24:55.958 11:32:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:24:55.958 11:32:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=
00:24:55.958 11:32:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:24:55.958 11:32:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:24:55.958 11:32:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:24:55.958 11:32:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:24:55.958 11:32:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:24:55.958 11:32:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=
00:24:55.958 11:32:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:24:55.958 11:32:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme'
00:24:55.958 11:32:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:24:56.216 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:24:56.216 fio-3.35
00:24:56.216 Starting 1 thread
00:24:56.216 EAL: No free 2048 kB hugepages reported on node 1
00:24:58.743
00:24:58.743 test: (groupid=0, jobs=1): err= 0: pid=2186533: Fri Jul 26 11:32:53 2024
00:24:58.743 read: IOPS=8154, BW=31.9MiB/s (33.4MB/s)(63.9MiB/2006msec)
00:24:58.743 slat (usec): min=2, max=271, avg= 4.45, stdev= 3.39
00:24:58.743 clat (usec): min=2990, max=15621, avg=8623.94, stdev=625.76
00:24:58.743 lat (usec): min=3019, max=15624, avg=8628.39, stdev=625.64
00:24:58.743 clat percentiles (usec):
00:24:58.743 | 1.00th=[ 7242], 5.00th=[ 7701], 10.00th=[ 7898], 20.00th=[ 8160],
00:24:58.743 | 30.00th=[ 8291], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 8717],
00:24:58.743 | 70.00th=[ 8979], 80.00th=[ 9110], 90.00th=[ 9372], 95.00th=[ 9503],
00:24:58.743 | 99.00th=[ 9896], 99.50th=[10159], 99.90th=[12649], 99.95th=[13829],
00:24:58.743 | 99.99th=[15533]
00:24:58.743 bw ( KiB/s): min=31592, max=33304, per=99.85%, avg=32568.00, stdev=750.32, samples=4
00:24:58.743 iops : min= 7898, max= 8324, avg=8142.00, stdev=187.30, samples=4
00:24:58.743 write: IOPS=8150, BW=31.8MiB/s (33.4MB/s)(63.9MiB/2006msec); 0 zone resets
00:24:58.743 slat (usec): min=2, max=130, avg= 4.62, stdev= 2.76
00:24:58.743 clat (usec): min=1724, max=13593, avg=7025.58, stdev=564.56
00:24:58.743 lat (usec): min=1733, max=13597, avg=7030.19, stdev=564.61
00:24:58.743 clat percentiles (usec):
00:24:58.743 | 1.00th=[ 5800], 5.00th=[ 6194], 10.00th=[ 6390], 20.00th=[ 6587],
00:24:58.743 | 30.00th=[ 6783], 40.00th=[ 6915], 50.00th=[ 7046], 60.00th=[ 7177],
00:24:58.743 | 70.00th=[ 7308], 80.00th=[ 7439], 90.00th=[ 7701], 95.00th=[ 7832],
00:24:58.743 | 99.00th=[ 8225], 99.50th=[ 8455], 99.90th=[10945], 99.95th=[11731],
00:24:58.743 | 99.99th=[13566]
00:24:58.743 bw ( KiB/s): min=32392, max=32720, per=99.97%, avg=32594.00, stdev=152.68, samples=4
00:24:58.743 iops : min= 8098, max= 8180, avg=8148.50, stdev=38.17, samples=4
00:24:58.743 lat (msec) : 2=0.01%, 4=0.10%, 10=99.41%, 20=0.49%
00:24:58.743 cpu : usr=69.38%, sys=26.88%, ctx=34, majf=0, minf=39
00:24:58.743 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
00:24:58.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:24:58.743 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:24:58.743 issued rwts: total=16358,16350,0,0 short=0,0,0,0 dropped=0,0,0,0
00:24:58.743 latency : target=0, window=0, percentile=100.00%, depth=128
00:24:58.743
00:24:58.743 Run status group 0 (all jobs):
00:24:58.743 READ: bw=31.9MiB/s (33.4MB/s), 31.9MiB/s-31.9MiB/s (33.4MB/s-33.4MB/s), io=63.9MiB (67.0MB), run=2006-2006msec
00:24:58.743 WRITE: bw=31.8MiB/s (33.4MB/s), 31.8MiB/s-31.8MiB/s (33.4MB/s-33.4MB/s), io=63.9MiB (67.0MB), run=2006-2006msec
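The fio_nvme wrapper above is the host side of the test: it resolves the SPDK fio plugin, uses ldd to check whether an ASAN runtime needs to be preloaded alongside it (none here, so asan_lib stays empty both times), and then runs stock fio with the plugin LD_PRELOADed. The target is addressed through the --filename string rather than a device node. Reduced to one invocation, with the workspace paths shortened for readability:

    LD_PRELOAD=/path/to/spdk/build/fio/spdk_nvme \
    /usr/src/fio/fio /path/to/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
        --bs=4096

With ioengine=spdk in the job file, fio hands I/O to the plugin, which connects to the subsystem at 10.0.0.2:4420 and drives namespace 1; that is where the 4 KiB randrw numbers above (roughly 8.1k read and 8.1k write IOPS) come from.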
00:24:58.743 11:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:24:58.744 11:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:24:58.744 11:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:24:58.744 11:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:24:58.744 11:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers
00:24:58.744 11:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:24:58.744 11:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift
00:24:58.744 11:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib=
00:24:58.744 11:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:24:58.744 11:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:24:58.744 11:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan
00:24:58.744 11:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:24:58.744 11:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=
00:24:58.744 11:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:24:58.744 11:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:24:58.744 11:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:24:58.744 11:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:24:58.744 11:32:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:24:58.744 11:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=
00:24:58.744 11:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:24:58.744 11:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme'
00:24:58.744 11:32:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:24:58.744 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128
00:24:58.744 fio-3.35
00:24:58.744 Starting 1 thread
00:24:58.744 EAL: No free 2048 kB hugepages reported on node 1
00:25:01.274
00:25:01.274 test: (groupid=0, jobs=1): err= 0: pid=2186869: Fri Jul 26 11:32:56 2024
00:25:01.274 read: IOPS=6766, BW=106MiB/s (111MB/s)(212MiB/2006msec)
00:25:01.274 slat (usec): min=3, max=211, avg= 5.37, stdev= 3.44
00:25:01.274 clat (usec): min=2835, max=27084, avg=11122.28, stdev=3372.59
00:25:01.274 lat (usec): min=2839, max=27089, avg=11127.65, stdev=3373.40
00:25:01.274 clat percentiles (usec):
00:25:01.274 | 1.00th=[ 5342], 5.00th=[ 6390], 10.00th=[ 7242], 20.00th=[ 8356],
00:25:01.274 | 30.00th=[ 9241], 40.00th=[10159], 50.00th=[10814], 60.00th=[11469],
00:25:01.274 | 70.00th=[12256], 80.00th=[13304], 90.00th=[15533], 95.00th=[17433],
00:25:01.274 | 99.00th=[22152], 99.50th=[24249], 99.90th=[26870], 99.95th=[27132],
00:25:01.274 | 99.99th=[27132]
00:25:01.274 bw ( KiB/s): min=45792, max=63232, per=49.96%, avg=54088.00, stdev=7360.85, samples=4
00:25:01.274 iops : min= 2862, max= 3952, avg=3380.50, stdev=460.05, samples=4
00:25:01.274 write: IOPS=3833, BW=59.9MiB/s (62.8MB/s)(111MiB/1852msec); 0 zone resets
00:25:01.274 slat (usec): min=39, max=428, avg=50.98, stdev=19.83
00:25:01.274 clat (usec): min=4721, max=28153, avg=13851.03, stdev=3450.80
00:25:01.274 lat (usec): min=4814, max=28256, avg=13902.01, stdev=3460.88
00:25:01.274 clat percentiles (usec):
00:25:01.274 | 1.00th=[ 8455], 5.00th=[ 9503], 10.00th=[10159], 20.00th=[10945],
00:25:01.274 | 30.00th=[11600], 40.00th=[12387], 50.00th=[13304], 60.00th=[14222],
00:25:01.274 | 70.00th=[15270], 80.00th=[16319], 90.00th=[18482], 95.00th=[20841],
00:25:01.274 | 99.00th=[23987], 99.50th=[25560], 99.90th=[27657], 99.95th=[27919],
00:25:01.274 | 99.99th=[28181]
00:25:01.274 bw ( KiB/s): min=48608, max=65600, per=91.80%, avg=56312.00, stdev=7365.07, samples=4
00:25:01.274 iops : min= 3038, max= 4100, avg=3519.50, stdev=460.32, samples=4
00:25:01.274 lat (msec) : 4=0.10%, 10=28.46%, 20=68.17%, 50=3.27%
00:25:01.274 cpu : usr=80.45%, sys=16.56%, ctx=24, majf=0, minf=57
00:25:01.274 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5%
00:25:01.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:01.274 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:25:01.274 issued rwts: total=13574,7100,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:01.274 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:01.274
00:25:01.274 Run status group 0 (all jobs):
00:25:01.274 READ: bw=106MiB/s (111MB/s), 106MiB/s-106MiB/s (111MB/s-111MB/s), io=212MiB (222MB), run=2006-2006msec
00:25:01.274 WRITE: bw=59.9MiB/s (62.8MB/s), 59.9MiB/s-59.9MiB/s (62.8MB/s-62.8MB/s), io=111MiB (116MB), run=1852-1852msec
00:25:01.274 11:32:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:01.532 11:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']'
00:25:01.532 11:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:25:01.532 11:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state
00:25:01.532 11:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini
00:25:01.532 11:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup
00:25:01.532 11:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync
00:25:01.532 11:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:25:01.532 11:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e
00:25:01.532 11:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:01.532 11:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:25:01.532 rmmod nvme_tcp
00:25:01.532 rmmod nvme_fabrics
00:25:01.532 rmmod nvme_keyring
00:25:01.532 11:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:25:01.532 11:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e
00:25:01.532 11:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0
00:25:01.532 11:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2186054 ']'
00:25:01.532 11:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2186054
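Teardown mirrors the setup: the subsystem is deleted over RPC, the kernel initiator modules are unloaded (the rmmod lines are modprobe -v -r narrating its work), and the target is killed by the PID captured at startup. In outline, as a sketch of what nvmftestfini and nvmfcleanup do rather than a verbatim excerpt:

    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sync
    modprobe -v -r nvme-tcp          # drags out nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 2186054                     # the nvmf_tgt PID recorded in nvmfpid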
00:25:01.532 11:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 2186054 ']'
00:25:01.532 11:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 2186054
00:25:01.532 11:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname
00:25:01.532 11:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:25:01.532 11:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2186054
00:25:01.532 11:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:25:01.532 11:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:25:01.532 11:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2186054'
00:25:01.532 killing process with pid 2186054
00:25:01.532 11:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 2186054
00:25:01.532 11:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 2186054
00:25:01.791 11:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:25:01.791 11:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:25:01.791 11:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:25:01.791 11:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:25:01.791 11:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns
00:25:01.791 11:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:01.791 11:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:01.791 11:32:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:04.327 11:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:25:04.327
00:25:04.327 real 0m13.536s
00:25:04.327 user 0m40.095s
00:25:04.327 sys 0m4.321s
00:25:04.327 11:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable
00:25:04.327 11:32:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:25:04.327 ************************************
00:25:04.327 END TEST nvmf_fio_host
00:25:04.327 ************************************
00:25:04.327 11:32:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp
00:25:04.327 11:32:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:25:04.327 11:32:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:25:04.327 11:32:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:25:04.327 ************************************
00:25:04.327 START TEST nvmf_failover
00:25:04.327 ************************************
00:25:04.327 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp
00:25:04.327 * Looking for test storage...
00:25:04.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:25:04.327 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:25:04.327 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s
00:25:04.327 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:25:04.327 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:25:04.327 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:25:04.327 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:25:04.327 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:25:04.327 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:25:04.327 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:25:04.327 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:25:04.327 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:25:04.327 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:25:04.327 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:25:04.327 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
00:25:04.327 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:25:04.327 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:25:04.327 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:25:04.327 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:25:04.327 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:25:04.327 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:04.327 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:04.327 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:04.327 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:04.328 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:04.328 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:04.328 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH
00:25:04.328 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:04.328 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0
00:25:04.328 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:25:04.328 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:25:04.328 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:25:04.328 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:25:04.328 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:04.328 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:25:04.328 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:25:04.328 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0
00:25:04.328 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64
00:25:04.328 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:25:04.328 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:25:04.328 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:25:04.328 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit
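Before its own nvmftestinit, failover.sh re-sources nvmf/common.sh, which mints a fresh host identity: nvme gen-hostnqn prints a host NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, and the UUID tail doubles as the host ID. One way to reproduce the pair of variables seen above (the parameter expansion is an illustration, not necessarily how common.sh derives it):

    NVME_HOSTNQN=$(nvme gen-hostnqn)       # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # keep only the UUID part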
00:25:04.328 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:25:04.328 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:25:04.328 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs
00:25:04.328 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no
00:25:04.328 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns
00:25:04.328 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:04.328 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:04.328 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:04.328 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:25:04.328 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:25:04.328 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable
00:25:04.328 11:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=()
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=()
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=()
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=()
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=()
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=()
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=()
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)'
00:25:06.863 Found 0000:84:00.0 (0x8086 - 0x159b)
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)'
00:25:06.863 Found 0000:84:00.1 (0x8086 - 0x159b)
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]]
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0'
00:25:06.863 Found net devices under 0000:84:00.0: cvl_0_0
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:06.863 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]]
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1'
00:25:06.864 Found net devices under 0000:84:00.1: cvl_0_1
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:25:06.864 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:06.864 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms
00:25:06.864
00:25:06.864 --- 10.0.0.2 ping statistics ---
00:25:06.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:06.864 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:06.864 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:06.864 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms
00:25:06.864
00:25:06.864 --- 10.0.0.1 ping statistics ---
00:25:06.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:06.864 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2189300
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
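One deliberate difference from the fio_host run: nvmfappstart launches the target with -m 0xE rather than -m 0xF. The mask is a CPU-core bitmap, so 0xE = 0b1110 selects cores 1 through 3, which is why the trace below reports 'Total cores available: 3' and starts three reactors, where the earlier run's 0xF = 0b1111 had four. (-e 0xFFFF is unrelated: it is the tracepoint group mask, as the app_setup_trace notices show.)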
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2189300
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2189300 ']'
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:06.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
00:25:06.864 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:07.123 [2024-07-26 11:33:02.483811] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization...
00:25:07.123 [2024-07-26 11:33:02.483914] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:07.123 EAL: No free 2048 kB hugepages reported on node 1
00:25:07.123 [2024-07-26 11:33:02.574716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:25:07.123 [2024-07-26 11:33:02.714800] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:07.123 [2024-07-26 11:33:02.714871] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:07.123 [2024-07-26 11:33:02.714891] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:07.123 [2024-07-26 11:33:02.714908] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:07.123 [2024-07-26 11:33:02.714922] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:07.123 [2024-07-26 11:33:02.715050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:25:07.123 [2024-07-26 11:33:02.715136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:25:07.123 [2024-07-26 11:33:02.715141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:25:07.382 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:25:07.382 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
00:25:07.382 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:25:07.382 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable
00:25:07.382 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:07.382 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:07.382 11:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:25:07.640 [2024-07-26 11:33:03.196744] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:07.640 11:33:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:25:08.206 Malloc0
00:25:08.206 11:33:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:08.465 11:33:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:25:09.031 11:33:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:09.290 [2024-07-26 11:33:04.767232] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:09.290 11:33:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:09.548 [2024-07-26 11:33:05.112394] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:25:09.548 11:33:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:09.806 [2024-07-26 11:33:05.457632] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:25:10.065 11:33:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2189623
00:25:10.065 11:33:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
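For failover, the initiator side is bdevperf rather than fio: -z starts it idle waiting for RPC commands on /var/tmp/bdevperf.sock, and -q 128 -o 4096 -w verify -t 15 set the queue depth, I/O size, verify workload, and 15-second runtime (-f we read as continue-on-failure, an assumption the trace itself does not spell out). Two paths to the same subsystem are then attached under one controller name, which is what gives the NVMe0n1 bdev a second path to fail over to; compacted from the trace that follows:

    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

With I/O running (perform_tests), the script then removes the 4420 listener so I/O fails over to 4421, attaches a third path on 4422, and removes 4421 in turn; the tcp.c recv-state errors at the end of the trace are the target tearing down the qpairs of each listener as it disappears.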
00:25:10.065 11:33:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2189623 /var/tmp/bdevperf.sock
00:25:10.065 11:33:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2189623 ']'
00:25:10.065 11:33:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:10.065 11:33:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:25:10.065 11:33:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:10.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:25:10.065 11:33:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
00:25:10.065 11:33:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:10.631 11:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:25:10.631 11:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
00:25:10.631 11:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:10.889 NVMe0n1
00:25:10.889 11:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:11.454
00:25:11.454 11:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2189868
00:25:11.454 11:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:11.454 11:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:25:12.388 11:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:12.647 11:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:25:15.950 11:33:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:16.207
00:25:16.207 11:33:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:16.466 [2024-07-26 11:33:12.084766] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f21f0 is same with the state(5) to be set
00:25:16.466 [... the same nvmf_tcp_qpair_set_recv_state *ERROR* line repeats 28 more times for tqpair=0x9f21f0, timestamps 11:33:12.084859 through 11:33:12.085239 ...]
00:25:16.466 11:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:25:19.762 11:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:20.020 [2024-07-26 11:33:15.478090] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:20.020 11:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:25:20.954 11:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:21.213 11:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2189868
00:25:27.779 0
00:25:27.779 11:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2189623
00:25:27.779 11:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2189623 ']'
00:25:27.779 11:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2189623
00:25:27.779 11:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:25:27.779 11:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:25:27.779 11:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2189623
00:25:27.779 11:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:25:27.779 11:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:25:27.779 11:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2189623'
00:25:27.779 killing process with pid 2189623
00:25:27.779 11:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2189623
00:25:27.779 11:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2189623
00:25:27.779 11:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
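The host side of the exercise, condensed the same way (bdevperf, bdevperf.py and rpc.py paths shortened as above; flags exactly as traced): bdevperf starts in wait-for-RPC mode, NVMe0 is attached to the same NQN on two ports so bdev_nvme has an alternate trid, and each nvmf_subsystem_remove_listener then yanks the active path to force a failover.

# Start bdevperf against its own RPC socket; -z makes it wait for perform_tests.
bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
# Two attach calls with the same -b/-n register 4420 and 4421 as paths of one controller.
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
sleep 1
# Removing the 4420 listener kills the active connection; I/O fails over to 4421.
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The try.txt dump that follows is bdevperf's own log of those forced failovers.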
00:25:27.779 [2024-07-26 11:33:05.549813] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization...
00:25:27.779 [2024-07-26 11:33:05.549929] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2189623 ]
00:25:27.779 EAL: No free 2048 kB hugepages reported on node 1
00:25:27.779 [2024-07-26 11:33:05.637758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:27.779 [2024-07-26 11:33:05.763649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:25:27.779 Running I/O for 15 seconds...
00:25:27.780 [2024-07-26 11:33:08.248948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:73112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:27.780 [2024-07-26 11:33:08.249016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:27.780 [... the same nvme_io_qpair_print_command / spdk_nvme_print_completion pair repeats through 11:33:08.253378 for every outstanding command on qid:1 (WRITE lba:73120-73248, READ lba:72232-73096, all len:8), each completed as ABORTED - SQ DELETION (00/08) ...]
00:25:27.783 [2024-07-26 11:33:08.253394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2087ba0 is same with the state(5) to be set
00:25:27.783 [2024-07-26 11:33:08.253413] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:27.783 [2024-07-26 11:33:08.253434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:27.783 [2024-07-26 11:33:08.253450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73104 len:8 PRP1 0x0 PRP2 0x0
00:25:27.783 [2024-07-26 11:33:08.253465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:27.783 [2024-07-26 11:33:08.253534] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2087ba0 was disconnected and freed. reset controller.
00:25:27.783 [2024-07-26 11:33:08.253557] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:25:27.783 [2024-07-26 11:33:08.253598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:27.783 [2024-07-26 11:33:08.253618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:27.783 [... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for cid:1, cid:2 and cid:3 ...]
00:25:27.783 [2024-07-26 11:33:08.253744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:27.783 [2024-07-26 11:33:08.257390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:27.783 [2024-07-26 11:33:08.257440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2061790 (9): Bad file descriptor
00:25:27.783 [2024-07-26 11:33:08.297874] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
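That completes one failover cycle in the dump: queued I/O aborted with SQ DELETION, bdev_nvme_failover_trid moved the controller from 10.0.0.2:4420 to 10.0.0.2:4421, and the reset succeeded. When replaying this by hand, the surviving path can be confirmed over the same socket; a minimal check, assuming the stock bdev_nvme_get_controllers RPC (not something this test itself calls):

# Lists the controller's current transport address; after the cycle above it should show port 4421.
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n NVMe0

The removal of the 4421 listener at 11:33:12 then produces the second, analogous abort-and-failover sequence below.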
00:25:27.783 [2024-07-26 11:33:12.086921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:63192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.783 [2024-07-26 11:33:12.086971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.783 [2024-07-26 11:33:12.087003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:63200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.783 [2024-07-26 11:33:12.087029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.783 [2024-07-26 11:33:12.087050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:63208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.783 [2024-07-26 11:33:12.087066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.783 [2024-07-26 11:33:12.087084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:63216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.783 [2024-07-26 11:33:12.087100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.783 [2024-07-26 11:33:12.087118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:63224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.783 [2024-07-26 11:33:12.087134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.783 [2024-07-26 11:33:12.087151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:63232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.783 [2024-07-26 11:33:12.087167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.783 [2024-07-26 11:33:12.087185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:63240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.783 [2024-07-26 11:33:12.087200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.783 [2024-07-26 11:33:12.087218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:63248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.783 [2024-07-26 11:33:12.087233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.783 [2024-07-26 11:33:12.087250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:63256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.783 [2024-07-26 11:33:12.087265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.783 [2024-07-26 11:33:12.087282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:63264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.783 [2024-07-26 11:33:12.087298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.783 [2024-07-26 11:33:12.087315] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:63272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.783 [2024-07-26 11:33:12.087330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.783 [2024-07-26 11:33:12.087347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:63280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.783 [2024-07-26 11:33:12.087363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.783 [2024-07-26 11:33:12.087380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:63288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.783 [2024-07-26 11:33:12.087395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.783 [2024-07-26 11:33:12.087412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:63296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.783 [2024-07-26 11:33:12.087437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.783 [2024-07-26 11:33:12.087462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:63304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.783 [2024-07-26 11:33:12.087479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.783 [2024-07-26 11:33:12.087496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:63312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.783 [2024-07-26 11:33:12.087512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.783 [2024-07-26 11:33:12.087529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:63320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.783 [2024-07-26 11:33:12.087545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.784 [2024-07-26 11:33:12.087563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:63328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.784 [2024-07-26 11:33:12.087579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.784 [2024-07-26 11:33:12.087596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:63336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.784 [2024-07-26 11:33:12.087612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.784 [2024-07-26 11:33:12.087629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:63344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.784 [2024-07-26 11:33:12.087645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.784 [2024-07-26 11:33:12.087662] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:63352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.784 [2024-07-26 11:33:12.087677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.784 [2024-07-26 11:33:12.087695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:63360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.784 [2024-07-26 11:33:12.087710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.784 [2024-07-26 11:33:12.087727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:63368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.784 [2024-07-26 11:33:12.087742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.784 [2024-07-26 11:33:12.087759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:63376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.784 [2024-07-26 11:33:12.087774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.784 [2024-07-26 11:33:12.087791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:63384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.784 [2024-07-26 11:33:12.087807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.784 [2024-07-26 11:33:12.087824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.784 [2024-07-26 11:33:12.087840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.784 [2024-07-26 11:33:12.087858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:63400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.784 [2024-07-26 11:33:12.087878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.784 [2024-07-26 11:33:12.087897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:63408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.784 [2024-07-26 11:33:12.087913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.784 [2024-07-26 11:33:12.087931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.784 [2024-07-26 11:33:12.087947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.784 [2024-07-26 11:33:12.087964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:63424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.784 [2024-07-26 11:33:12.087980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.784 [2024-07-26 11:33:12.087997] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:88 nsid:1 lba:63432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.784 [2024-07-26 11:33:12.088013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.784 [2024-07-26 11:33:12.088031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:63456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.784 [2024-07-26 11:33:12.088046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.784 [2024-07-26 11:33:12.088063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:63464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.784 [2024-07-26 11:33:12.088079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.784 [2024-07-26 11:33:12.088096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:63472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.784 [2024-07-26 11:33:12.088112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.784 [2024-07-26 11:33:12.088129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:63480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.784 [2024-07-26 11:33:12.088144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.784 [2024-07-26 11:33:12.088161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:63488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.784 [2024-07-26 11:33:12.088176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.784 [2024-07-26 11:33:12.088193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:63496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.784 [2024-07-26 11:33:12.088209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.784 [2024-07-26 11:33:12.088225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.784 [2024-07-26 11:33:12.088241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.784 [2024-07-26 11:33:12.088258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:63512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.784 [2024-07-26 11:33:12.088274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.784 [2024-07-26 11:33:12.088295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.784 [2024-07-26 11:33:12.088311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.784 [2024-07-26 11:33:12.088328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:63528 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.784 [2024-07-26 11:33:12.088344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.784 [2024-07-26 11:33:12.088361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:63536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.784 [2024-07-26 11:33:12.088377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.784 [2024-07-26 11:33:12.088395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:63544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.784 [2024-07-26 11:33:12.088410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.784 [2024-07-26 11:33:12.088435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.784 [2024-07-26 11:33:12.088453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.784 [2024-07-26 11:33:12.088471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:63560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.784 [2024-07-26 11:33:12.088487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.784 [2024-07-26 11:33:12.088504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:63568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.784 [2024-07-26 11:33:12.088519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.785 [2024-07-26 11:33:12.088538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:63576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.785 [2024-07-26 11:33:12.088553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.785 [2024-07-26 11:33:12.088570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.785 [2024-07-26 11:33:12.088586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.785 [2024-07-26 11:33:12.088603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:63592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.785 [2024-07-26 11:33:12.088619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.785 [2024-07-26 11:33:12.088636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.785 [2024-07-26 11:33:12.088651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.785 [2024-07-26 11:33:12.088668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:27.785 [2024-07-26 11:33:12.088683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.785 [2024-07-26 11:33:12.088700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.785 [2024-07-26 11:33:12.088716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.785 [2024-07-26 11:33:12.088738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.785 [2024-07-26 11:33:12.088754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.785 [2024-07-26 11:33:12.088771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.785 [2024-07-26 11:33:12.088787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.785 [2024-07-26 11:33:12.088804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.785 [2024-07-26 11:33:12.088820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.785 [2024-07-26 11:33:12.088837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:63648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.785 [2024-07-26 11:33:12.088852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.785 [2024-07-26 11:33:12.088869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.785 [2024-07-26 11:33:12.088885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.785 [2024-07-26 11:33:12.088902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:63664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.785 [2024-07-26 11:33:12.088918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.785 [2024-07-26 11:33:12.088935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.785 [2024-07-26 11:33:12.088951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.785 [2024-07-26 11:33:12.088968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.785 [2024-07-26 11:33:12.088984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.785 [2024-07-26 11:33:12.089001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.785 [2024-07-26 11:33:12.089016] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.785 [2024-07-26 11:33:12.089033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.785 [2024-07-26 11:33:12.089049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.785 [2024-07-26 11:33:12.089067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.785 [2024-07-26 11:33:12.089084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.785 [2024-07-26 11:33:12.089101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:63712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.785 [2024-07-26 11:33:12.089117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.785 [2024-07-26 11:33:12.089134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:63720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.785 [2024-07-26 11:33:12.089154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.785 [2024-07-26 11:33:12.089172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:63728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.785 [2024-07-26 11:33:12.089188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.785 [2024-07-26 11:33:12.089205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.785 [2024-07-26 11:33:12.089221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.785 [2024-07-26 11:33:12.089238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.785 [2024-07-26 11:33:12.089254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.785 [2024-07-26 11:33:12.089271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:63752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.785 [2024-07-26 11:33:12.089287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.785 [2024-07-26 11:33:12.089304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:63760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.785 [2024-07-26 11:33:12.089320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.785 [2024-07-26 11:33:12.089337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.785 [2024-07-26 11:33:12.089352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.785 [2024-07-26 11:33:12.089369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.785 [2024-07-26 11:33:12.089385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.785 [2024-07-26 11:33:12.089402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:63784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.785 [2024-07-26 11:33:12.089418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.785 [2024-07-26 11:33:12.089441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.785 [2024-07-26 11:33:12.089458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.785 [2024-07-26 11:33:12.089475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:63800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.785 [2024-07-26 11:33:12.089491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.785 [2024-07-26 11:33:12.089508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:63808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.785 [2024-07-26 11:33:12.089523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.785 [2024-07-26 11:33:12.089540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.785 [2024-07-26 11:33:12.089556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.785 [2024-07-26 11:33:12.089582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:63824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.785 [2024-07-26 11:33:12.089598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.785 [2024-07-26 11:33:12.089616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:63832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.785 [2024-07-26 11:33:12.089631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.785 [2024-07-26 11:33:12.089676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.785 [2024-07-26 11:33:12.089696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63840 len:8 PRP1 0x0 PRP2 0x0 00:25:27.785 [2024-07-26 11:33:12.089712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.785 [2024-07-26 11:33:12.089732] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.785 [2024-07-26 11:33:12.089746] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:25:27.785 [2024-07-26 11:33:12.089759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63848 len:8 PRP1 0x0 PRP2 0x0 00:25:27.785 [2024-07-26 11:33:12.089774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.785 [2024-07-26 11:33:12.089788] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.785 [2024-07-26 11:33:12.089801] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.785 [2024-07-26 11:33:12.089813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63856 len:8 PRP1 0x0 PRP2 0x0 00:25:27.785 [2024-07-26 11:33:12.089828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.785 [2024-07-26 11:33:12.089843] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.785 [2024-07-26 11:33:12.089855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.785 [2024-07-26 11:33:12.089867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63864 len:8 PRP1 0x0 PRP2 0x0 00:25:27.786 [2024-07-26 11:33:12.089882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.786 [2024-07-26 11:33:12.089896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.786 [2024-07-26 11:33:12.089908] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.786 [2024-07-26 11:33:12.089921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63872 len:8 PRP1 0x0 PRP2 0x0 00:25:27.786 [2024-07-26 11:33:12.089935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.786 [2024-07-26 11:33:12.089950] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.786 [2024-07-26 11:33:12.089962] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.786 [2024-07-26 11:33:12.089975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63880 len:8 PRP1 0x0 PRP2 0x0 00:25:27.786 [2024-07-26 11:33:12.089989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.786 [2024-07-26 11:33:12.090003] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.786 [2024-07-26 11:33:12.090015] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.786 [2024-07-26 11:33:12.090027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63888 len:8 PRP1 0x0 PRP2 0x0 00:25:27.786 [2024-07-26 11:33:12.090048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.786 [2024-07-26 11:33:12.090064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.786 [2024-07-26 11:33:12.090076] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.786 [2024-07-26 
11:33:12.090089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63896 len:8 PRP1 0x0 PRP2 0x0 00:25:27.786 [2024-07-26 11:33:12.090103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.786 [2024-07-26 11:33:12.090118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.786 [2024-07-26 11:33:12.090130] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.786 [2024-07-26 11:33:12.090143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63904 len:8 PRP1 0x0 PRP2 0x0 00:25:27.786 [2024-07-26 11:33:12.090157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.786 [2024-07-26 11:33:12.090172] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.786 [2024-07-26 11:33:12.090184] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.786 [2024-07-26 11:33:12.090196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63912 len:8 PRP1 0x0 PRP2 0x0 00:25:27.786 [2024-07-26 11:33:12.090210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.786 [2024-07-26 11:33:12.090225] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.786 [2024-07-26 11:33:12.090237] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.786 [2024-07-26 11:33:12.090249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63920 len:8 PRP1 0x0 PRP2 0x0 00:25:27.786 [2024-07-26 11:33:12.090263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.786 [2024-07-26 11:33:12.090277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.786 [2024-07-26 11:33:12.090289] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.786 [2024-07-26 11:33:12.090302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63928 len:8 PRP1 0x0 PRP2 0x0 00:25:27.786 [2024-07-26 11:33:12.090316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.786 [2024-07-26 11:33:12.090330] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.786 [2024-07-26 11:33:12.090342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.786 [2024-07-26 11:33:12.090354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63936 len:8 PRP1 0x0 PRP2 0x0 00:25:27.786 [2024-07-26 11:33:12.090369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.786 [2024-07-26 11:33:12.090383] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.786 [2024-07-26 11:33:12.090395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.786 [2024-07-26 11:33:12.090407] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63944 len:8 PRP1 0x0 PRP2 0x0 00:25:27.786 [2024-07-26 11:33:12.090422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.786 [2024-07-26 11:33:12.090443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.786 [2024-07-26 11:33:12.090456] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.786 [2024-07-26 11:33:12.090473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63952 len:8 PRP1 0x0 PRP2 0x0 00:25:27.786 [2024-07-26 11:33:12.090488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.786 [2024-07-26 11:33:12.090502] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.786 [2024-07-26 11:33:12.090514] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.786 [2024-07-26 11:33:12.090527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63960 len:8 PRP1 0x0 PRP2 0x0 00:25:27.786 [2024-07-26 11:33:12.090541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.786 [2024-07-26 11:33:12.090556] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.786 [2024-07-26 11:33:12.090568] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.786 [2024-07-26 11:33:12.090580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63968 len:8 PRP1 0x0 PRP2 0x0 00:25:27.786 [2024-07-26 11:33:12.090594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.786 [2024-07-26 11:33:12.090609] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.786 [2024-07-26 11:33:12.090621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.786 [2024-07-26 11:33:12.090633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63976 len:8 PRP1 0x0 PRP2 0x0 00:25:27.786 [2024-07-26 11:33:12.090648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.786 [2024-07-26 11:33:12.090662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.786 [2024-07-26 11:33:12.090674] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.786 [2024-07-26 11:33:12.090686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63984 len:8 PRP1 0x0 PRP2 0x0 00:25:27.786 [2024-07-26 11:33:12.090701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.786 [2024-07-26 11:33:12.090715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.786 [2024-07-26 11:33:12.090727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.786 [2024-07-26 11:33:12.090739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:63992 len:8 PRP1 0x0 PRP2 0x0 00:25:27.786 [2024-07-26 11:33:12.090753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.786 [2024-07-26 11:33:12.090770] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.786 [2024-07-26 11:33:12.090783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.786 [2024-07-26 11:33:12.090795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64000 len:8 PRP1 0x0 PRP2 0x0 00:25:27.786 [2024-07-26 11:33:12.090810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.786 [2024-07-26 11:33:12.090825] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.786 [2024-07-26 11:33:12.090837] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.786 [2024-07-26 11:33:12.090850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64008 len:8 PRP1 0x0 PRP2 0x0 00:25:27.786 [2024-07-26 11:33:12.090864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.786 [2024-07-26 11:33:12.090883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.786 [2024-07-26 11:33:12.090896] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.786 [2024-07-26 11:33:12.090909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64016 len:8 PRP1 0x0 PRP2 0x0 00:25:27.786 [2024-07-26 11:33:12.090924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.786 [2024-07-26 11:33:12.090938] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.786 [2024-07-26 11:33:12.090951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.786 [2024-07-26 11:33:12.090963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64024 len:8 PRP1 0x0 PRP2 0x0 00:25:27.786 [2024-07-26 11:33:12.090978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.786 [2024-07-26 11:33:12.090993] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.786 [2024-07-26 11:33:12.091005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.786 [2024-07-26 11:33:12.091018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64032 len:8 PRP1 0x0 PRP2 0x0 00:25:27.786 [2024-07-26 11:33:12.091032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.786 [2024-07-26 11:33:12.091047] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.786 [2024-07-26 11:33:12.091059] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.786 [2024-07-26 11:33:12.091072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64040 len:8 PRP1 0x0 PRP2 0x0 
00:25:27.786 [2024-07-26 11:33:12.091086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.786 [2024-07-26 11:33:12.091102] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.786 [2024-07-26 11:33:12.091114] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.786 [2024-07-26 11:33:12.091126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64048 len:8 PRP1 0x0 PRP2 0x0 00:25:27.786 [2024-07-26 11:33:12.091141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.787 [2024-07-26 11:33:12.091156] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.787 [2024-07-26 11:33:12.091168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.787 [2024-07-26 11:33:12.091181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64056 len:8 PRP1 0x0 PRP2 0x0 00:25:27.787 [2024-07-26 11:33:12.091196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.787 [2024-07-26 11:33:12.091211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.787 [2024-07-26 11:33:12.091223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.787 [2024-07-26 11:33:12.091236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64064 len:8 PRP1 0x0 PRP2 0x0 00:25:27.787 [2024-07-26 11:33:12.091250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.787 [2024-07-26 11:33:12.091265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.787 [2024-07-26 11:33:12.091278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.787 [2024-07-26 11:33:12.091290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64072 len:8 PRP1 0x0 PRP2 0x0 00:25:27.787 [2024-07-26 11:33:12.091305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.787 [2024-07-26 11:33:12.091323] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.787 [2024-07-26 11:33:12.091336] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.787 [2024-07-26 11:33:12.091349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64080 len:8 PRP1 0x0 PRP2 0x0 00:25:27.787 [2024-07-26 11:33:12.091364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.787 [2024-07-26 11:33:12.091378] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.787 [2024-07-26 11:33:12.091391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.787 [2024-07-26 11:33:12.091404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64088 len:8 PRP1 0x0 PRP2 0x0 00:25:27.787 [2024-07-26 11:33:12.091442] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.787 [2024-07-26 11:33:12.091460] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.787 [2024-07-26 11:33:12.091473] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.787 [2024-07-26 11:33:12.091486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64096 len:8 PRP1 0x0 PRP2 0x0 00:25:27.787 [2024-07-26 11:33:12.091501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.787 [2024-07-26 11:33:12.091515] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.787 [2024-07-26 11:33:12.091528] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.787 [2024-07-26 11:33:12.091540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64104 len:8 PRP1 0x0 PRP2 0x0 00:25:27.787 [2024-07-26 11:33:12.091555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.787 [2024-07-26 11:33:12.091570] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.787 [2024-07-26 11:33:12.091582] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.787 [2024-07-26 11:33:12.091595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64112 len:8 PRP1 0x0 PRP2 0x0 00:25:27.787 [2024-07-26 11:33:12.091610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.787 [2024-07-26 11:33:12.091625] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.787 [2024-07-26 11:33:12.091637] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.787 [2024-07-26 11:33:12.091649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64120 len:8 PRP1 0x0 PRP2 0x0 00:25:27.787 [2024-07-26 11:33:12.091663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.787 [2024-07-26 11:33:12.091678] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.787 [2024-07-26 11:33:12.091690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.787 [2024-07-26 11:33:12.091702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64128 len:8 PRP1 0x0 PRP2 0x0 00:25:27.787 [2024-07-26 11:33:12.091716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.787 [2024-07-26 11:33:12.091730] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.787 [2024-07-26 11:33:12.091742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.787 [2024-07-26 11:33:12.091759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64136 len:8 PRP1 0x0 PRP2 0x0 00:25:27.787 [2024-07-26 11:33:12.091775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.787 [2024-07-26 11:33:12.091789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.787 [2024-07-26 11:33:12.091801] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.787 [2024-07-26 11:33:12.091814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64144 len:8 PRP1 0x0 PRP2 0x0 00:25:27.787 [2024-07-26 11:33:12.091827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.787 [2024-07-26 11:33:12.091842] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.787 [2024-07-26 11:33:12.091854] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.787 [2024-07-26 11:33:12.091866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64152 len:8 PRP1 0x0 PRP2 0x0 00:25:27.787 [2024-07-26 11:33:12.091887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.787 [2024-07-26 11:33:12.091902] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.787 [2024-07-26 11:33:12.091914] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.787 [2024-07-26 11:33:12.091927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64160 len:8 PRP1 0x0 PRP2 0x0 00:25:27.787 [2024-07-26 11:33:12.091941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.787 [2024-07-26 11:33:12.091956] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.787 [2024-07-26 11:33:12.091968] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.787 [2024-07-26 11:33:12.091980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64168 len:8 PRP1 0x0 PRP2 0x0 00:25:27.787 [2024-07-26 11:33:12.091995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.787 [2024-07-26 11:33:12.092009] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.787 [2024-07-26 11:33:12.092021] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.787 [2024-07-26 11:33:12.092033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64176 len:8 PRP1 0x0 PRP2 0x0 00:25:27.787 [2024-07-26 11:33:12.092047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.787 [2024-07-26 11:33:12.092062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.787 [2024-07-26 11:33:12.092074] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.787 [2024-07-26 11:33:12.092086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64184 len:8 PRP1 0x0 PRP2 0x0 00:25:27.787 [2024-07-26 11:33:12.092100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:27.787 [2024-07-26 11:33:12.092115] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.787 [2024-07-26 11:33:12.092127] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.787 [2024-07-26 11:33:12.092139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64192 len:8 PRP1 0x0 PRP2 0x0 00:25:27.787 [2024-07-26 11:33:12.092153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.787 [2024-07-26 11:33:12.092168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.787 [2024-07-26 11:33:12.092183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.787 [2024-07-26 11:33:12.092197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64200 len:8 PRP1 0x0 PRP2 0x0 00:25:27.787 [2024-07-26 11:33:12.092212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.787 [2024-07-26 11:33:12.092226] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.787 [2024-07-26 11:33:12.092238] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.787 [2024-07-26 11:33:12.092250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64208 len:8 PRP1 0x0 PRP2 0x0 00:25:27.787 [2024-07-26 11:33:12.092265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.787 [2024-07-26 11:33:12.092279] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.787 [2024-07-26 11:33:12.092292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.787 [2024-07-26 11:33:12.092304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63440 len:8 PRP1 0x0 PRP2 0x0 00:25:27.787 [2024-07-26 11:33:12.092318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.787 [2024-07-26 11:33:12.092333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.787 [2024-07-26 11:33:12.092345] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.787 [2024-07-26 11:33:12.092357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63448 len:8 PRP1 0x0 PRP2 0x0 00:25:27.787 [2024-07-26 11:33:12.092371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.787 [2024-07-26 11:33:12.092445] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2087d80 was disconnected and freed. reset controller. 
00:25:27.787 [2024-07-26 11:33:12.092468] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:27.787 [2024-07-26 11:33:12.092507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.787 [2024-07-26 11:33:12.092527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.788 [2024-07-26 11:33:12.092544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.788 [2024-07-26 11:33:12.092558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.788 [2024-07-26 11:33:12.092573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.788 [2024-07-26 11:33:12.092588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.788 [2024-07-26 11:33:12.092604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.788 [2024-07-26 11:33:12.092618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.788 [2024-07-26 11:33:12.092633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:27.788 [2024-07-26 11:33:12.092692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2061790 (9): Bad file descriptor 00:25:27.788 [2024-07-26 11:33:12.096261] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:27.788 [2024-07-26 11:33:12.129868] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
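Second hop of the same pattern: the qpair at 10.0.0.2:4421 (0x2087d80) is disconnected and freed, the queued reads and writes are aborted with the same retryable SQ DELETION status, and the path moves on to 10.0.0.2:4422 before the reset again succeeds. For reference, a host-side sketch of how such alternate paths are registered and inspected; the controller name nvme0 is an assumption (it is whatever -b value the test passed to bdev_nvme_attach_controller), and depending on SPDK version the second attach may additionally need the attach-time failover/multipath option:

    # Host side (assumed): attach the same controller name once per trid so
    # bdev_nvme has alternate paths registered to fail over to.
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # Dump controller state to see which trid is currently connected.
    scripts/rpc.py bdev_nvme_get_controllers -n nvme0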
00:25:27.788 [2024-07-26 11:33:16.779455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:106600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.788 [2024-07-26 11:33:16.779526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.788 [2024-07-26 11:33:16.779559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:106608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.788 [2024-07-26 11:33:16.779578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.788 [2024-07-26 11:33:16.779596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:106616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.788 [2024-07-26 11:33:16.779613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.788 [2024-07-26 11:33:16.779630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:106624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.788 [2024-07-26 11:33:16.779647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.788 [2024-07-26 11:33:16.779664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:106632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.788 [2024-07-26 11:33:16.779680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.788 [2024-07-26 11:33:16.779698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:106640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.788 [2024-07-26 11:33:16.779714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.788 [2024-07-26 11:33:16.779732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:106648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.788 [2024-07-26 11:33:16.779748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.788 [2024-07-26 11:33:16.779765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:106656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.788 [2024-07-26 11:33:16.779781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.788 [2024-07-26 11:33:16.779799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:106664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.788 [2024-07-26 11:33:16.779815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.788 [2024-07-26 11:33:16.779833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:106672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.788 [2024-07-26 11:33:16.779850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.788 [2024-07-26 11:33:16.779867] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:106680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.788 [2024-07-26 11:33:16.779883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.788 [2024-07-26 11:33:16.779901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:106688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.788 [2024-07-26 11:33:16.779916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.788 [2024-07-26 11:33:16.779934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:106696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.788 [2024-07-26 11:33:16.779960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.788 [2024-07-26 11:33:16.779979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:106704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.788 [2024-07-26 11:33:16.779995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.788 [2024-07-26 11:33:16.780012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:106712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.788 [2024-07-26 11:33:16.780028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.788 [2024-07-26 11:33:16.780045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:106720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.788 [2024-07-26 11:33:16.780062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.788 [2024-07-26 11:33:16.780079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:106728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.788 [2024-07-26 11:33:16.780095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.788 [2024-07-26 11:33:16.780113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:106736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.788 [2024-07-26 11:33:16.780128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.788 [2024-07-26 11:33:16.780145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:106744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.788 [2024-07-26 11:33:16.780162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.788 [2024-07-26 11:33:16.780179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:106752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.788 [2024-07-26 11:33:16.780195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.788 [2024-07-26 11:33:16.780212] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:106760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.788 [2024-07-26 11:33:16.780228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.788 [2024-07-26 11:33:16.780245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:106768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.788 [2024-07-26 11:33:16.780261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.788 [2024-07-26 11:33:16.780279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:105776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.788 [2024-07-26 11:33:16.780294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.788 [2024-07-26 11:33:16.780311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:105784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.788 [2024-07-26 11:33:16.780327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.788 [2024-07-26 11:33:16.780344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:105792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.788 [2024-07-26 11:33:16.780360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.788 [2024-07-26 11:33:16.780377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.788 [2024-07-26 11:33:16.780397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.788 [2024-07-26 11:33:16.780415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:105808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.788 [2024-07-26 11:33:16.780440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.788 [2024-07-26 11:33:16.780460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:105816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.788 [2024-07-26 11:33:16.780476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.788 [2024-07-26 11:33:16.780494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:105824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.788 [2024-07-26 11:33:16.780510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.788 [2024-07-26 11:33:16.780527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:105832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.788 [2024-07-26 11:33:16.780543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.788 [2024-07-26 11:33:16.780560] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:105840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.788 [2024-07-26 11:33:16.780576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.788 [2024-07-26 11:33:16.780593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:105848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.788 [2024-07-26 11:33:16.780609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.788 [2024-07-26 11:33:16.780626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:105856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.788 [2024-07-26 11:33:16.780642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.788 [2024-07-26 11:33:16.780659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:105864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.788 [2024-07-26 11:33:16.780675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.788 [2024-07-26 11:33:16.780692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:105872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.788 [2024-07-26 11:33:16.780708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.789 [2024-07-26 11:33:16.780725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:105880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.789 [2024-07-26 11:33:16.780742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.789 [2024-07-26 11:33:16.780759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:105888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.789 [2024-07-26 11:33:16.780774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.789 [2024-07-26 11:33:16.780791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:106776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.789 [2024-07-26 11:33:16.780807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.789 [2024-07-26 11:33:16.780830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:105896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.789 [2024-07-26 11:33:16.780846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.789 [2024-07-26 11:33:16.780864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.789 [2024-07-26 11:33:16.780880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.789 [2024-07-26 11:33:16.780897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 
nsid:1 lba:105912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.789 [2024-07-26 11:33:16.780913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.789 [2024-07-26 11:33:16.780930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:105920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.789 [2024-07-26 11:33:16.780945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.789 [2024-07-26 11:33:16.780963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:105928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.789 [2024-07-26 11:33:16.780978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.789 [2024-07-26 11:33:16.780995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:105936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.789 [2024-07-26 11:33:16.781010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.789 [2024-07-26 11:33:16.781027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:105944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.789 [2024-07-26 11:33:16.781043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.789 [2024-07-26 11:33:16.781060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:105952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.789 [2024-07-26 11:33:16.781076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.789 [2024-07-26 11:33:16.781093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:105960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.789 [2024-07-26 11:33:16.781108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.789 [2024-07-26 11:33:16.781126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.789 [2024-07-26 11:33:16.781141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.789 [2024-07-26 11:33:16.781158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:105976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.789 [2024-07-26 11:33:16.781173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.789 [2024-07-26 11:33:16.781190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:105984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.789 [2024-07-26 11:33:16.781206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.789 [2024-07-26 11:33:16.781223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:105992 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.789 [2024-07-26 11:33:16.781242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.789 [2024-07-26 11:33:16.781261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:106000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.789 [2024-07-26 11:33:16.781277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.789 [2024-07-26 11:33:16.781295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:106008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.789 [2024-07-26 11:33:16.781311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.789 [2024-07-26 11:33:16.781328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:106016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.789 [2024-07-26 11:33:16.781344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.789 [2024-07-26 11:33:16.781361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:106024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.789 [2024-07-26 11:33:16.781377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.789 [2024-07-26 11:33:16.781395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:106032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.789 [2024-07-26 11:33:16.781411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.789 [2024-07-26 11:33:16.781436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:106040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.789 [2024-07-26 11:33:16.781454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.789 [2024-07-26 11:33:16.781472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:106048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.789 [2024-07-26 11:33:16.781488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.789 [2024-07-26 11:33:16.781505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:106056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.789 [2024-07-26 11:33:16.781521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.789 [2024-07-26 11:33:16.781538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:106064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.789 [2024-07-26 11:33:16.781554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.789 [2024-07-26 11:33:16.781571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:106072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:27.789 [2024-07-26 11:33:16.781587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.789 [2024-07-26 11:33:16.781604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:106080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.789 [2024-07-26 11:33:16.781619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.789 [2024-07-26 11:33:16.781637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:106088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.789 [2024-07-26 11:33:16.781653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.789 [2024-07-26 11:33:16.781674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:106096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.789 [2024-07-26 11:33:16.781691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.789 [2024-07-26 11:33:16.781708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:106104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.789 [2024-07-26 11:33:16.781724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.789 [2024-07-26 11:33:16.781741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:106112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.789 [2024-07-26 11:33:16.781757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.789 [2024-07-26 11:33:16.781774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:106120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.789 [2024-07-26 11:33:16.781789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.789 [2024-07-26 11:33:16.781806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:106128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.789 [2024-07-26 11:33:16.781821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.789 [2024-07-26 11:33:16.781838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:106136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.789 [2024-07-26 11:33:16.781853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.789 [2024-07-26 11:33:16.781871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:106144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.790 [2024-07-26 11:33:16.781886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.790 [2024-07-26 11:33:16.781904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:106152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.790 [2024-07-26 
11:33:16.781919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.790 [2024-07-26 11:33:16.781937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:106160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.790 [2024-07-26 11:33:16.781952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.790 [2024-07-26 11:33:16.781970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:106168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.790 [2024-07-26 11:33:16.781985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.790 [2024-07-26 11:33:16.782003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.790 [2024-07-26 11:33:16.782018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.790 [2024-07-26 11:33:16.782035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:106184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.790 [2024-07-26 11:33:16.782051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.790 [2024-07-26 11:33:16.782068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:106192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.790 [2024-07-26 11:33:16.782087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.790 [2024-07-26 11:33:16.782105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:106200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.790 [2024-07-26 11:33:16.782121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.790 [2024-07-26 11:33:16.782138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:106208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.790 [2024-07-26 11:33:16.782154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.790 [2024-07-26 11:33:16.782170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:106216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.790 [2024-07-26 11:33:16.782186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.790 [2024-07-26 11:33:16.782203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:106224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.790 [2024-07-26 11:33:16.782218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.790 [2024-07-26 11:33:16.782236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:106232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.790 [2024-07-26 11:33:16.782251] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.790 [2024-07-26 11:33:16.782269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:106240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.790 [2024-07-26 11:33:16.782284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.790 [2024-07-26 11:33:16.782302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:106248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.790 [2024-07-26 11:33:16.782317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.790 [2024-07-26 11:33:16.782335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:106256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.790 [2024-07-26 11:33:16.782350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.790 [2024-07-26 11:33:16.782367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:106264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.790 [2024-07-26 11:33:16.782382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.790 [2024-07-26 11:33:16.782399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:106272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.790 [2024-07-26 11:33:16.782415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.790 [2024-07-26 11:33:16.782439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:106280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.790 [2024-07-26 11:33:16.782457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.790 [2024-07-26 11:33:16.782475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:106288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.790 [2024-07-26 11:33:16.782492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.790 [2024-07-26 11:33:16.782514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:106296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.790 [2024-07-26 11:33:16.782530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.790 [2024-07-26 11:33:16.782548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:106304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.790 [2024-07-26 11:33:16.782563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.790 [2024-07-26 11:33:16.782580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:106312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.790 [2024-07-26 11:33:16.782596] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.790 [2024-07-26 11:33:16.782613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:106320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.790 [2024-07-26 11:33:16.782628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.790 [2024-07-26 11:33:16.782646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:106328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.790 [2024-07-26 11:33:16.782661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.790 [2024-07-26 11:33:16.782679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:106336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.790 [2024-07-26 11:33:16.782694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.790 [2024-07-26 11:33:16.782711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:106344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.790 [2024-07-26 11:33:16.782727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.790 [2024-07-26 11:33:16.782744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:106352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.790 [2024-07-26 11:33:16.782759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.790 [2024-07-26 11:33:16.782776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:106360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.790 [2024-07-26 11:33:16.782792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.790 [2024-07-26 11:33:16.782809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:106368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.790 [2024-07-26 11:33:16.782825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.790 [2024-07-26 11:33:16.782842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:106376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.790 [2024-07-26 11:33:16.782857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.790 [2024-07-26 11:33:16.782874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:106384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.790 [2024-07-26 11:33:16.782890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.790 [2024-07-26 11:33:16.782907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:106392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.790 [2024-07-26 11:33:16.782931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.790 [2024-07-26 11:33:16.782949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:106400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.790 [2024-07-26 11:33:16.782964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.790 [2024-07-26 11:33:16.782982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:106408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.790 [2024-07-26 11:33:16.782997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.790 [2024-07-26 11:33:16.783015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:106416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.790 [2024-07-26 11:33:16.783031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.790 [2024-07-26 11:33:16.783048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:106424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.790 [2024-07-26 11:33:16.783064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.790 [2024-07-26 11:33:16.783081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:106432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.790 [2024-07-26 11:33:16.783096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.790 [2024-07-26 11:33:16.783113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.790 [2024-07-26 11:33:16.783129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.790 [2024-07-26 11:33:16.783147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:106448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.790 [2024-07-26 11:33:16.783162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.790 [2024-07-26 11:33:16.783179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:106456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.791 [2024-07-26 11:33:16.783195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.791 [2024-07-26 11:33:16.783212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:106464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.791 [2024-07-26 11:33:16.783227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.791 [2024-07-26 11:33:16.783245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:106784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.791 [2024-07-26 11:33:16.783260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.791 [2024-07-26 11:33:16.783277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:106792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.791 [2024-07-26 11:33:16.783293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.791 [2024-07-26 11:33:16.783309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:106472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.791 [2024-07-26 11:33:16.783325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.791 [2024-07-26 11:33:16.783347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:106480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.791 [2024-07-26 11:33:16.783363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.791 [2024-07-26 11:33:16.783380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:106488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.791 [2024-07-26 11:33:16.783396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.791 [2024-07-26 11:33:16.783414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:106496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.791 [2024-07-26 11:33:16.783434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.791 [2024-07-26 11:33:16.783453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:106504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.791 [2024-07-26 11:33:16.783470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.791 [2024-07-26 11:33:16.783487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:106512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.791 [2024-07-26 11:33:16.783502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.791 [2024-07-26 11:33:16.783519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:106520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.791 [2024-07-26 11:33:16.783534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.791 [2024-07-26 11:33:16.783552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:106528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.791 [2024-07-26 11:33:16.783567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.791 [2024-07-26 11:33:16.783584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:106536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.791 [2024-07-26 11:33:16.783599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:27.791 [2024-07-26 11:33:16.783616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:106544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.791 [2024-07-26 11:33:16.783631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.791 [2024-07-26 11:33:16.783648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:106552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.791 [2024-07-26 11:33:16.783664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.791 [2024-07-26 11:33:16.783680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:106560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.791 [2024-07-26 11:33:16.783695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.791 [2024-07-26 11:33:16.783712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:106568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.791 [2024-07-26 11:33:16.783727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.791 [2024-07-26 11:33:16.783744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.791 [2024-07-26 11:33:16.783759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.791 [2024-07-26 11:33:16.783780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:106584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.791 [2024-07-26 11:33:16.783796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.791 [2024-07-26 11:33:16.783812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2087d80 is same with the state(5) to be set 00:25:27.791 [2024-07-26 11:33:16.783831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.791 [2024-07-26 11:33:16.783844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.791 [2024-07-26 11:33:16.783857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106592 len:8 PRP1 0x0 PRP2 0x0 00:25:27.791 [2024-07-26 11:33:16.783871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.791 [2024-07-26 11:33:16.783936] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2087d80 was disconnected and freed. reset controller. 
00:25:27.791 [2024-07-26 11:33:16.783956] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:25:27.791 [2024-07-26 11:33:16.783996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:27.791 [2024-07-26 11:33:16.784015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:27.791 [2024-07-26 11:33:16.784032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:27.791 [2024-07-26 11:33:16.784047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:27.791 [2024-07-26 11:33:16.784062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:27.791 [2024-07-26 11:33:16.784077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:27.791 [2024-07-26 11:33:16.784092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:27.791 [2024-07-26 11:33:16.784106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:27.791 [2024-07-26 11:33:16.784122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:27.791 [2024-07-26 11:33:16.787741] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:27.791 [2024-07-26 11:33:16.787783] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2061790 (9): Bad file descriptor
00:25:27.791 [2024-07-26 11:33:16.866901] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:25:27.791 
00:25:27.791 Latency(us)
00:25:27.791 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:27.791 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:27.791 Verification LBA range: start 0x0 length 0x4000
00:25:27.791 NVMe0n1 : 15.00 7900.80 30.86 345.32 0.00 15492.72 831.34 18641.35
00:25:27.791 ===================================================================================================================
00:25:27.791 Total : 7900.80 30.86 345.32 0.00 15492.72 831.34 18641.35
00:25:27.791 Received shutdown signal, test time was about 15.000000 seconds
00:25:27.791 
00:25:27.791 Latency(us)
00:25:27.791 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:27.791 ===================================================================================================================
00:25:27.791 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:27.791 11:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:25:27.791 11:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:25:27.791 11:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:25:27.791 11:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2192094
00:25:27.791 11:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:25:27.791 11:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2192094 /var/tmp/bdevperf.sock
00:25:27.791 11:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2192094 ']'
00:25:27.791 11:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:27.791 11:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:25:27.791 11:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:25:27.791 11:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
00:25:27.791 11:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:27.791 11:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:25:27.791 11:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
00:25:27.791 11:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:27.791 [2024-07-26 11:33:23.141520] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:25:27.791 11:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:28.048 [2024-07-26 11:33:23.494535] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:25:28.048 11:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:28.305 NVMe0n1
00:25:28.305 11:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:28.869 
00:25:28.870 11:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:29.432 
00:25:29.432 11:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:29.432 11:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:25:29.688 11:33:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:29.946 11:33:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:25:33.226 11:33:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:33.226 11:33:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:25:33.483 11:33:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2192881
00:25:33.483 11:33:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:33.483 11:33:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2192881
00:25:34.857 0
00:25:34.857 11:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:34.857 [2024-07-26 11:33:22.501009] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization...
00:25:34.857 [2024-07-26 11:33:22.501098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2192094 ]
00:25:34.857 EAL: No free 2048 kB hugepages reported on node 1
00:25:34.857 [2024-07-26 11:33:22.565107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:34.857 [2024-07-26 11:33:22.683063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:25:34.857 [2024-07-26 11:33:25.450895] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:25:34.857 [2024-07-26 11:33:25.450977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:34.857 [2024-07-26 11:33:25.451002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:34.857 [2024-07-26 11:33:25.451021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:34.857 [2024-07-26 11:33:25.451036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:34.857 [2024-07-26 11:33:25.451051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:34.857 [2024-07-26 11:33:25.451066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:34.857 [2024-07-26 11:33:25.451082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:34.857 [2024-07-26 11:33:25.451097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:34.857 [2024-07-26 11:33:25.451112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:34.857 [2024-07-26 11:33:25.451161] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:34.857 [2024-07-26 11:33:25.451196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10c0790 (9): Bad file descriptor
00:25:34.857 [2024-07-26 11:33:25.456238] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:25:34.857 Running I/O for 1 seconds...
00:25:34.857 00:25:34.857 Latency(us) 00:25:34.857 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:34.857 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:34.857 Verification LBA range: start 0x0 length 0x4000 00:25:34.857 NVMe0n1 : 1.02 8051.39 31.45 0.00 0.00 15830.48 3495.25 13883.92 00:25:34.857 =================================================================================================================== 00:25:34.857 Total : 8051.39 31.45 0.00 0.00 15830.48 3495.25 13883.92 00:25:34.857 11:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:34.857 11:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:34.857 11:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:35.449 11:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:35.449 11:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:35.729 11:33:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:36.308 11:33:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:39.588 11:33:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:39.588 11:33:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:39.588 11:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2192094 00:25:39.588 11:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2192094 ']' 00:25:39.588 11:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2192094 00:25:39.588 11:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:25:39.588 11:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:39.588 11:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2192094 00:25:39.588 11:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:39.588 11:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:39.588 11:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2192094' 00:25:39.588 killing process with pid 2192094 00:25:39.588 11:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2192094 00:25:39.588 11:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2192094 00:25:39.847 11:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:39.847 11:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:40.105 11:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:25:40.105 11:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:40.105 11:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:25:40.105 11:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup
00:25:40.105 11:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync
00:25:40.105 11:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:25:40.105 11:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e
00:25:40.105 11:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:40.105 11:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:25:40.105 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
11:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:25:40.364 11:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e
00:25:40.364 11:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0
00:25:40.364 11:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2189300 ']'
00:25:40.364 11:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2189300
00:25:40.364 11:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2189300 ']'
00:25:40.364 11:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2189300
00:25:40.364 11:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:25:40.364 11:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:25:40.364 11:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2189300
00:25:40.364 11:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:25:40.364 11:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:25:40.364 11:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2189300'
killing process with pid 2189300
11:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2189300
11:33:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2189300
00:25:40.623 11:33:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:25:40.623 11:33:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:25:40.623 11:33:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:25:40.623 11:33:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:25:40.623 11:33:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns
00:25:40.623 11:33:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:40.623 11:33:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:40.623 11:33:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:42.530 11:33:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:25:42.530
00:25:42.530 real 0m38.632s
00:25:42.530 user 2m16.703s
00:25:42.530 sys 0m6.984s
00:25:42.530 11:33:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable
00:25:42.530 11:33:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:42.530 ************************************
00:25:42.530 END TEST nvmf_failover
00:25:42.530 ************************************
00:25:42.790 11:33:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:25:42.790 11:33:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:25:42.790 11:33:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:25:42.790 11:33:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:25:42.790 ************************************
00:25:42.790 START TEST nvmf_host_discovery ************************************
00:25:42.790 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:25:42.790 * Looking for test storage...
00:25:42.790 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:25:42.790 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:25:42.790 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s
00:25:42.790 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:25:42.790 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:25:42.790 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:25:42.790 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:25:42.790 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:25:42.790 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:25:42.790 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:25:42.790 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:25:42.790 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:25:42.790 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:25:42.790 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:25:42.790 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
00:25:42.790 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:25:42.790 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery --
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:42.790 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:42.790 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:42.790 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:42.790 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:42.790 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:42.790 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:42.790 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.790 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.790 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.790 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:42.790 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.790 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:25:42.790 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:42.790 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:42.790 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:42.790 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:42.790 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:42.790 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:42.790 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:42.790 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:42.790 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:42.790 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:42.790 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:42.790 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:42.791 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:42.791 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:42.791 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:42.791 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:42.791 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:42.791 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:42.791 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:42.791 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:42.791 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.791 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:42.791 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.791 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:42.791 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:42.791 11:33:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:25:42.791 11:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:45.328 11:33:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:45.328 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:45.328 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:45.328 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:45.329 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:45.329 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:45.329 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.329 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:45.329 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:45.329 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:45.329 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:45.329 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.329 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:45.329 Found net devices under 0000:84:00.0: cvl_0_0 00:25:45.329 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.329 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci 
in "${pci_devs[@]}" 00:25:45.329 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.329 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:45.329 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:45.329 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:45.329 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:45.329 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.329 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:45.329 Found net devices under 0000:84:00.1: cvl_0_1 00:25:45.329 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.329 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:45.329 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:25:45.329 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:45.329 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:45.329 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:45.329 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:45.329 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:45.329 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:45.329 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:45.329 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:45.329 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:45.329 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:45.329 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:45.329 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:45.329 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:45.329 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:45.329 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:45.329 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:45.329 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:45.329 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:45.329 11:33:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:45.329 11:33:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:45.587 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:45.587 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:45.587 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:25:45.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:45.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms
00:25:45.587
00:25:45.587 --- 10.0.0.2 ping statistics ---
00:25:45.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:45.587 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms
00:25:45.587 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:45.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:45.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms
00:25:45.587
00:25:45.587 --- 10.0.0.1 ping statistics ---
00:25:45.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:45.587 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms
00:25:45.587 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:45.587 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0
00:25:45.588 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:25:45.588 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:45.588 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:25:45.588 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:25:45.588 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:45.588 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:25:45.588 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:25:45.588 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2
00:25:45.588 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:25:45.588 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable
00:25:45.588 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:45.588 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2195631
00:25:45.588 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:25:45.588 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 2195631
00:25:45.588 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 2195631 ']'
00:25:45.588 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:45.588 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100
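The nvmf_tcp_init trace above turns the two E810 ports into a point-to-point test bed: the target-side port is moved into a private network namespace, each side gets a 10.0.0.x/24 address, port 4420 is opened in the firewall, and reachability is proven with one ping in each direction before the target process starts. A condensed replay of those commands follows; the cvl_0_0/cvl_0_1 interface names are specific to this rig and will differ on other hardware.

    # nvmf_tcp_init, condensed from the trace above (run as root).
    ip netns add cvl_0_0_ns_spdk                       # target gets its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator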
00:25:45.588 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable
11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
[2024-07-26 11:33:41.144102] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization...
[2024-07-26 11:33:41.144202] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-26 11:33:41.226873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-26 11:33:41.348229] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-07-26 11:33:41.348294] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-07-26 11:33:41.348311] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-07-26 11:33:41.348325] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-07-26 11:33:41.348346] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
[2024-07-26 11:33:41.348378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 ))
11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0
11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable
11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
[2024-07-26 11:33:41.504746] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
[2024-07-26 11:33:41.512980] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512
11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
null0
11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512
11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
null1
11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine
11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2195772
11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock
11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2195772 /tmp/host.sock
11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 2195772 ']'
11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock
11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100
11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable
11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
[2024-07-26 11:33:41.600641] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization...
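At this point discovery.sh has provisioned its target side: a TCP transport, the well-known discovery subsystem listening on 10.0.0.2:8009, and two null bdevs to serve as namespaces, plus a second nvmf_tgt on /tmp/host.sock that plays the host role. The rpc_cmd calls in the trace are thin wrappers around scripts/rpc.py; a minimal sketch of the same target setup, assuming the default /var/tmp/spdk.sock socket:

    # Target-side setup, condensed from the rpc_cmd trace above.
    rpc.py nvmf_create_transport -t tcp -o -u 8192       # TCP transport, 8 KiB IO unit
    rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009                       # discovery service listener
    rpc.py bdev_null_create null0 1000 512               # 1000 MB null bdev, 512 B blocks
    rpc.py bdev_null_create null1 1000 512
    rpc.py bdev_wait_for_examine                         # let bdev examine callbacks finish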
00:25:46.105 [2024-07-26 11:33:41.600745] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2195772 ] 00:25:46.105 EAL: No free 2048 kB hugepages reported on node 1 00:25:46.105 [2024-07-26 11:33:41.675030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.364 [2024-07-26 11:33:41.796997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.364 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:46.364 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:25:46.364 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:46.364 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:46.364 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.364 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.364 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.364 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:46.364 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.364 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.364 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.364 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:46.364 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:46.364 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:46.364 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:46.364 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:46.364 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.364 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.364 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:46.364 11:33:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.364 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:46.364 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:46.364 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:46.364 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:46.364 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.364 
11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.364 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:46.364 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:46.622 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.622 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:46.622 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:46.622 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.622 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.622 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.622 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:46.622 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:46.622 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:46.622 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.622 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.622 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:46.622 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:46.622 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.622 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:46.622 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:46.622 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:46.622 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:46.622 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.622 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.622 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:46.623 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:46.623 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.623 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:46.623 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:46.623 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.623 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.623 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.623 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:46.623 11:33:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:46.623 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:46.623 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:46.623 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.623 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.623 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:46.623 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.881 [2024-07-26 11:33:42.347155] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # 
get_bdev_list 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:25:46.881 11:33:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:25:47.447 [2024-07-26 11:33:42.984650] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:47.447 [2024-07-26 11:33:42.984679] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:47.447 [2024-07-26 11:33:42.984704] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:47.447 
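The discovery_attach_cb/discovery_poller records above are the payoff of the steps traced before them: the host starts a discovery service against 10.0.0.2:8009, the target then exposes cnode0 with a namespace, a 4420 data listener, and the host NQN on its allow list, and the poller attaches controller nvme0. Condensed from the trace (host RPCs go to /tmp/host.sock, target RPCs to the default socket):

    # Host: subscribe to the discovery service; reported subsystems auto-attach as "nvme*".
    rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
    # Target: expose a subsystem for the discovery log page to report.
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test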
[2024-07-26 11:33:43.070980] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:47.705 [2024-07-26 11:33:43.134697] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:47.705 [2024-07-26 11:33:43.134725] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:47.963 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:47.963 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:47.963 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:47.963 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:47.963 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:47.963 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.963 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:47.963 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:47.963 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:47.963 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.963 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.963 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:47.963 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:47.963 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:47.963 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:47.963 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:47.963 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:47.963 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:47.963 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:47.963 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:47.963 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.963 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:47.963 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:47.963 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:47.963 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.963 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
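With the controller attached, the test spins in waitforcondition until the host's view converges. The get_subsystem_names and get_bdev_list helpers it evaluates reduce to two RPC pipelines, visible verbatim in the trace:

    # get_subsystem_names: controller names known to the host's bdev_nvme layer.
    rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    # -> "nvme0" once discovery has attached the controller
    # get_bdev_list: bdev names, one per attached namespace.
    rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    # -> "nvme0n1" here, then "nvme0n1 nvme0n2" after null1 is added as a second namespace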
00:25:47.963 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:47.963 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:47.963 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:47.963 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:47.963 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:47.963 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:48.221 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:48.478 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.478 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:48.478 11:33:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:25:49.412 11:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:49.412 11:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:49.412 11:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:49.412 11:33:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:49.412 11:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:49.412 11:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.412 11:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:49.412 11:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:49.412 11:33:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:49.412 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.412 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:49.412 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:49.412 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:49.412 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:49.412 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:49.412 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:49.412 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:49.412 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:49.412 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:49.412 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:49.412 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:49.412 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:49.412 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.412 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:49.412 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.672 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:49.672 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:49.672 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:49.672 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:49.672 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:49.672 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.672 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:49.672 [2024-07-26 11:33:45.127224] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:49.672 [2024-07-26 11:33:45.128612] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:49.672 [2024-07-26 11:33:45.128656] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:49.672 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.672 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:49.672 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:49.672 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:49.672 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:49.672 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:49.672 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:49.672 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:49.672 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:49.672 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.672 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:49.672 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:49.672 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:49.672 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.672 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.672 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:49.672 11:33:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:49.673 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:49.673 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:49.673 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:49.673 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:49.673 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:49.673 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:49.673 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:49.673 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.673 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:49.673 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:49.673 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:49.673 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.673 [2024-07-26 11:33:45.214855] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:49.673 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:49.673 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:49.673 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:49.673 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:49.673 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:49.673 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:49.673 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:49.673 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:49.673 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:49.673 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:49.673 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.673 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:49.673 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:49.673 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@63 -- # xargs 00:25:49.673 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.673 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:49.673 11:33:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:25:49.932 [2024-07-26 11:33:45.484213] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:49.932 [2024-07-26 11:33:45.484248] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:49.933 [2024-07-26 11:33:45.484260] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:50.868 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:50.868 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:50.868 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:50.868 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:50.868 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.869 [2024-07-26 11:33:46.459199] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:50.869 [2024-07-26 11:33:46.459237] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:50.869 [2024-07-26 11:33:46.460685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.869 [2024-07-26 11:33:46.460719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.869 [2024-07-26 11:33:46.460749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.869 [2024-07-26 11:33:46.460764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.869 [2024-07-26 11:33:46.460779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.869 [2024-07-26 11:33:46.460793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.869 [2024-07-26 11:33:46.460809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.869 [2024-07-26 11:33:46.460824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.869 [2024-07-26 11:33:46.460839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7230 is same with the state(5) to be set 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:50.869 [2024-07-26 11:33:46.470677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d7230 (9): Bad file descriptor 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.869 [2024-07-26 11:33:46.480725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:50.869 [2024-07-26 11:33:46.481091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.869 [2024-07-26 11:33:46.481137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17d7230 with addr=10.0.0.2, port=4420 00:25:50.869 [2024-07-26 11:33:46.481155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7230 is same with the state(5) to be set 00:25:50.869 [2024-07-26 11:33:46.481182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d7230 (9): Bad file descriptor 00:25:50.869 [2024-07-26 11:33:46.481207] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:50.869 [2024-07-26 11:33:46.481222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:50.869 [2024-07-26 11:33:46.481239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:50.869 [2024-07-26 11:33:46.481262] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:50.869 [2024-07-26 11:33:46.490811] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:50.869 [2024-07-26 11:33:46.491208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.869 [2024-07-26 11:33:46.491252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17d7230 with addr=10.0.0.2, port=4420 00:25:50.869 [2024-07-26 11:33:46.491273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7230 is same with the state(5) to be set 00:25:50.869 [2024-07-26 11:33:46.491300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d7230 (9): Bad file descriptor 00:25:50.869 [2024-07-26 11:33:46.491342] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:50.869 [2024-07-26 11:33:46.491363] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:50.869 [2024-07-26 11:33:46.491377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:50.869 [2024-07-26 11:33:46.491399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:50.869 [2024-07-26 11:33:46.500891] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:50.869 [2024-07-26 11:33:46.501186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.869 [2024-07-26 11:33:46.501235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17d7230 with addr=10.0.0.2, port=4420 00:25:50.869 [2024-07-26 11:33:46.501254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7230 is same with the state(5) to be set 00:25:50.869 [2024-07-26 11:33:46.501278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d7230 (9): Bad file descriptor 00:25:50.869 [2024-07-26 11:33:46.501301] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:50.869 [2024-07-26 11:33:46.501317] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:50.869 [2024-07-26 11:33:46.501331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:50.869 [2024-07-26 11:33:46.501351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:50.869 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:50.870 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:50.870 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:50.870 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:50.870 [2024-07-26 11:33:46.510972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:50.870 [2024-07-26 11:33:46.511204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.870 [2024-07-26 11:33:46.511253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17d7230 with addr=10.0.0.2, port=4420 00:25:50.870 [2024-07-26 11:33:46.511272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7230 is same with the state(5) to be set 00:25:50.870 [2024-07-26 11:33:46.511297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d7230 (9): Bad file descriptor 00:25:50.870 [2024-07-26 11:33:46.511320] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:50.870 [2024-07-26 11:33:46.511335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:50.870 [2024-07-26 11:33:46.511350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:50.870 [2024-07-26 11:33:46.511370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
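The get_bdev_list expansion that follows is built from the fragments visible in the trace at host/discovery.sh@55: an RPC against the host application's socket, a jq projection of the bdev names, and sort | xargs to flatten the result into one space-separated line. A sketch assembled from those traced fragments (the real helper may differ in detail):

get_bdev_list() {
    # list every bdev the host sees and flatten the names into e.g. "nvme0n1 nvme0n2"
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}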
00:25:50.870 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:50.870 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:50.870 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:50.870 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.870 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.870 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:50.870 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:50.870 [2024-07-26 11:33:46.521054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:50.870 [2024-07-26 11:33:46.521325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.870 [2024-07-26 11:33:46.521375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17d7230 with addr=10.0.0.2, port=4420 00:25:50.870 [2024-07-26 11:33:46.521393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7230 is same with the state(5) to be set 00:25:50.870 [2024-07-26 11:33:46.521418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d7230 (9): Bad file descriptor 00:25:50.870 [2024-07-26 11:33:46.521451] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:50.870 [2024-07-26 11:33:46.521468] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:50.870 [2024-07-26 11:33:46.521483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:50.870 [2024-07-26 11:33:46.521504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.129 [2024-07-26 11:33:46.531137] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:51.129 [2024-07-26 11:33:46.531358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.129 [2024-07-26 11:33:46.531407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17d7230 with addr=10.0.0.2, port=4420 00:25:51.129 [2024-07-26 11:33:46.531438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7230 is same with the state(5) to be set 00:25:51.129 [2024-07-26 11:33:46.531466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d7230 (9): Bad file descriptor 00:25:51.129 [2024-07-26 11:33:46.531489] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:51.129 [2024-07-26 11:33:46.531503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:51.129 [2024-07-26 11:33:46.531518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:51.129 [2024-07-26 11:33:46.531538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.129 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.129 [2024-07-26 11:33:46.541213] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:51.129 [2024-07-26 11:33:46.541461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.129 [2024-07-26 11:33:46.541493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17d7230 with addr=10.0.0.2, port=4420 00:25:51.129 [2024-07-26 11:33:46.541510] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7230 is same with the state(5) to be set 00:25:51.129 [2024-07-26 11:33:46.541534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d7230 (9): Bad file descriptor 00:25:51.129 [2024-07-26 11:33:46.541557] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:51.129 [2024-07-26 11:33:46.541572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:51.129 [2024-07-26 11:33:46.541586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:51.129 [2024-07-26 11:33:46.541606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.129 [2024-07-26 11:33:46.551290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:51.129 [2024-07-26 11:33:46.551505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.129 [2024-07-26 11:33:46.551536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17d7230 with addr=10.0.0.2, port=4420 00:25:51.129 [2024-07-26 11:33:46.551555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7230 is same with the state(5) to be set 00:25:51.129 [2024-07-26 11:33:46.551579] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d7230 (9): Bad file descriptor 00:25:51.129 [2024-07-26 11:33:46.551602] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:51.129 [2024-07-26 11:33:46.551617] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:51.129 [2024-07-26 11:33:46.551631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:51.129 [2024-07-26 11:33:46.551652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.129 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:51.129 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:51.129 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:51.129 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:51.129 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:51.129 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:51.129 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:51.129 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:51.129 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:51.129 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:51.129 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.129 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.129 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:51.129 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:51.129 [2024-07-26 11:33:46.561369] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:51.129 [2024-07-26 11:33:46.561597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.129 [2024-07-26 11:33:46.561629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17d7230 with addr=10.0.0.2, port=4420 00:25:51.129 [2024-07-26 11:33:46.561647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7230 is same with the state(5) to be set 00:25:51.129 [2024-07-26 11:33:46.561672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d7230 (9): Bad file descriptor 00:25:51.129 [2024-07-26 11:33:46.561694] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:51.129 [2024-07-26 11:33:46.561709] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:51.129 [2024-07-26 11:33:46.561723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:51.129 [2024-07-26 11:33:46.561744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.129 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.129 [2024-07-26 11:33:46.571451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:51.129 [2024-07-26 11:33:46.571685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.129 [2024-07-26 11:33:46.571716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17d7230 with addr=10.0.0.2, port=4420 00:25:51.129 [2024-07-26 11:33:46.571734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7230 is same with the state(5) to be set 00:25:51.129 [2024-07-26 11:33:46.571758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d7230 (9): Bad file descriptor 00:25:51.129 [2024-07-26 11:33:46.571781] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:51.129 [2024-07-26 11:33:46.571796] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:51.129 [2024-07-26 11:33:46.571810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:51.129 [2024-07-26 11:33:46.571831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.129 [2024-07-26 11:33:46.581529] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:51.129 [2024-07-26 11:33:46.581767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.129 [2024-07-26 11:33:46.581797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17d7230 with addr=10.0.0.2, port=4420 00:25:51.129 [2024-07-26 11:33:46.581815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d7230 is same with the state(5) to be set 00:25:51.129 [2024-07-26 11:33:46.581839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d7230 (9): Bad file descriptor 00:25:51.129 [2024-07-26 11:33:46.581869] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:51.129 [2024-07-26 11:33:46.581885] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:51.129 [2024-07-26 11:33:46.581899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:51.129 [2024-07-26 11:33:46.581920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
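The surrounding host/discovery.sh@131 wait polls get_subsystem_paths nvme0 until it reports only $NVMF_SECOND_PORT (4421), which happens once the removed 4420 path is reaped below. From the host/discovery.sh@63 fragments in the trace, the helper queries the named controller and extracts each path's trsvcid, sorted numerically; a sketch under the same caveat as above:

get_subsystem_paths() {
    local name=$1
    # one trsvcid per connected path, e.g. "4420 4421" while both listeners are up
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n $name |
        jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}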
00:25:51.129 [2024-07-26 11:33:46.587100] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:51.129 [2024-07-26 11:33:46.587135] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:51.129 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:25:51.129 11:33:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:25:52.064 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:52.064 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:52.064 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:52.064 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:52.064 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:52.064 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.064 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.064 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:52.065 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:52.065 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.065 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:25:52.065 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:52.065 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:52.065 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:52.065 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:52.065 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:52.065 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:52.065 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:52.065 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:52.065 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:52.065 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:52.065 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.065 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:52.065 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.065 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- 
# eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:52.323 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:52.324 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:52.324 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:52.324 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:52.324 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.324 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.324 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.324 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:52.324 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:52.324 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:52.324 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:52.324 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:52.324 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.324 11:33:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.387 [2024-07-26 11:33:48.988666] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:53.387 [2024-07-26 11:33:48.988705] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:53.387 [2024-07-26 11:33:48.988732] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:53.676 [2024-07-26 11:33:49.076997] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:53.935 [2024-07-26 11:33:49.387720] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:53.935 [2024-07-26 11:33:49.387779] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:53.935 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.935 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:53.935 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:53.935 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:53.935 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:53.935 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:53.935 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:53.935 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.936 request: 00:25:53.936 { 00:25:53.936 "name": "nvme", 00:25:53.936 "trtype": "tcp", 00:25:53.936 "traddr": "10.0.0.2", 00:25:53.936 "adrfam": "ipv4", 00:25:53.936 "trsvcid": "8009", 00:25:53.936 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:53.936 "wait_for_attach": true, 00:25:53.936 "method": "bdev_nvme_start_discovery", 00:25:53.936 "req_id": 1 00:25:53.936 } 00:25:53.936 Got JSON-RPC error response 00:25:53.936 response: 00:25:53.936 { 00:25:53.936 "code": -17, 00:25:53.936 "message": "File exists" 00:25:53.936 } 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.936 request: 00:25:53.936 { 00:25:53.936 "name": "nvme_second", 00:25:53.936 "trtype": "tcp", 00:25:53.936 "traddr": "10.0.0.2", 00:25:53.936 "adrfam": "ipv4", 00:25:53.936 "trsvcid": "8009", 00:25:53.936 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:53.936 "wait_for_attach": true, 00:25:53.936 "method": "bdev_nvme_start_discovery", 00:25:53.936 "req_id": 1 00:25:53.936 } 00:25:53.936 Got JSON-RPC error response 00:25:53.936 response: 00:25:53.936 { 00:25:53.936 "code": -17, 00:25:53.936 "message": "File exists" 00:25:53.936 } 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:53.936 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.195 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:54.195 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:54.195 11:33:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:54.195 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:54.195 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.195 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.195 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:54.195 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:54.195 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.195 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:54.195 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:54.195 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:54.195 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:54.195 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:54.195 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:54.195 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:54.195 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:54.195 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:54.195 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.195 11:33:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.132 [2024-07-26 11:33:50.667546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:55.132 [2024-07-26 11:33:50.667607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f25b0 with addr=10.0.0.2, port=8010 00:25:55.132 [2024-07-26 11:33:50.667650] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:55.132 [2024-07-26 11:33:50.667666] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:55.132 [2024-07-26 11:33:50.667680] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:56.064 [2024-07-26 11:33:51.669937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-07-26 11:33:51.669998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f25b0 with addr=10.0.0.2, port=8010 00:25:56.064 [2024-07-26 11:33:51.670031] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:56.064 [2024-07-26 11:33:51.670047] nvme.c: 830:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:25:56.064 [2024-07-26 11:33:51.670061] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:57.437 [2024-07-26 11:33:52.672031] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:57.437 request: 00:25:57.437 { 00:25:57.437 "name": "nvme_second", 00:25:57.437 "trtype": "tcp", 00:25:57.437 "traddr": "10.0.0.2", 00:25:57.437 "adrfam": "ipv4", 00:25:57.437 "trsvcid": "8010", 00:25:57.437 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:57.437 "wait_for_attach": false, 00:25:57.437 "attach_timeout_ms": 3000, 00:25:57.438 "method": "bdev_nvme_start_discovery", 00:25:57.438 "req_id": 1 00:25:57.438 } 00:25:57.438 Got JSON-RPC error response 00:25:57.438 response: 00:25:57.438 { 00:25:57.438 "code": -110, 00:25:57.438 "message": "Connection timed out" 00:25:57.438 } 00:25:57.438 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:57.438 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:57.438 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:57.438 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:57.438 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:57.438 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:57.438 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:57.438 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:57.438 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.438 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:57.438 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.438 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:57.438 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.438 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:57.438 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:57.438 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2195772 00:25:57.438 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:57.438 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:57.438 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:25:57.438 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:57.438 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:25:57.438 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:57.438 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:57.438 rmmod nvme_tcp 00:25:57.438 rmmod nvme_fabrics 00:25:57.438 rmmod nvme_keyring 00:25:57.438 11:33:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:57.438 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:25:57.438 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:25:57.438 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2195631 ']' 00:25:57.438 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2195631 00:25:57.438 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 2195631 ']' 00:25:57.438 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 2195631 00:25:57.438 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:25:57.438 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:57.438 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2195631 00:25:57.438 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:57.438 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:57.438 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2195631' 00:25:57.438 killing process with pid 2195631 00:25:57.438 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 2195631 00:25:57.438 11:33:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 2195631 00:25:57.697 11:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:57.697 11:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:57.697 11:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:57.697 11:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:57.697 11:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:57.697 11:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:57.697 11:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:57.697 11:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:59.601 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:59.601 00:25:59.601 real 0m16.907s 00:25:59.601 user 0m25.672s 00:25:59.601 sys 0m3.709s 00:25:59.601 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:59.601 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.601 ************************************ 00:25:59.601 END TEST nvmf_host_discovery 00:25:59.601 ************************************ 00:25:59.601 11:33:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:59.601 11:33:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
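For reference, the discovery error paths exercised in the test above can be reproduced by hand against the same host socket. A minimal sketch, assuming SPDK's scripts/rpc.py is reachable on PATH and the discovery service from the test is still running; the rpc.py invocations themselves are taken verbatim from the trace:

    # Reusing an in-use discovery name (or pointing a new name at an address
    # that is already being discovered) fails with -17 "File exists".
    RPC="scripts/rpc.py -s /tmp/host.sock"
    $RPC bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -w && echo "unexpectedly succeeded"

    # A fresh name aimed at a port nobody listens on (8010), with a 3 s attach
    # timeout (-T 3000), instead fails with -110 "Connection timed out".
    $RPC bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 && echo "unexpectedly succeeded"

    # Neither failure disturbs the existing discovery context or its bdevs.
    $RPC bdev_nvme_get_discovery_info | jq -r '.[].name'   # -> nvme
    $RPC bdev_get_bdevs | jq -r '.[].name'                 # -> nvme0n1 nvme0n2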
00:25:59.601 11:33:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:59.601 11:33:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.601 ************************************ 00:25:59.601 START TEST nvmf_host_multipath_status 00:25:59.601 ************************************ 00:25:59.601 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:59.861 * Looking for test storage... 00:25:59.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
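The common.sh bootstrap above pins the port plan (4420/4421/4422) and derives a per-run host identity with nvme-cli. A rough equivalent of that fragment, with the HOSTID derivation sketched from the values visible in the trace (common.sh itself may compute it differently):

    # Port plan and host identity as established by nvmf/common.sh above.
    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422
    NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # bare UUID, matching the trace's value
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")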
00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
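The three long lines above are paths/export.sh prepending the pinned toolchain directories (protoc 21.7, go 1.21.1, golangci-lint 1.54.2) once per sourcing level, which is why the same triplet repeats inside the echoed PATH. Functionally the whole block reduces to something like:

    # Duplicate PATH entries are harmless: lookup stops at the first match.
    export PATH="/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:$PATH"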
00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:25:59.861 11:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:02.395 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:02.395 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:26:02.395 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:02.395 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:02.395 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:02.395 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:02.395 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:26:02.395 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:26:02.395 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:02.395 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:26:02.395 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:26:02.395 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:26:02.395 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:26:02.395 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:26:02.395 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:26:02.395 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:02.395 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:02.395 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:02.395 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:02.395 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:02.395 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:02.395 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:02.396 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:02.396 
11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:02.396 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:02.396 Found net devices under 0000:84:00.0: cvl_0_0 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:02.396 11:33:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:02.396 Found net devices under 0000:84:00.1: cvl_0_1 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:02.396 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:02.397 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:02.397 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:02.397 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables 
-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:02.397 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:02.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:02.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:26:02.397 00:26:02.397 --- 10.0.0.2 ping statistics --- 00:26:02.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:02.397 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:26:02.397 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:02.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:02.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:26:02.397 00:26:02.397 --- 10.0.0.1 ping statistics --- 00:26:02.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:02.397 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:26:02.397 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:02.397 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:26:02.397 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:02.397 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:02.397 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:02.397 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:02.397 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:02.397 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:02.397 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:02.397 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:02.397 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:02.397 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:02.397 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:02.397 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2199225 00:26:02.397 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:02.397 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2199225 00:26:02.397 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 2199225 ']' 00:26:02.397 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:02.397 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:02.397 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:02.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:02.397 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:02.397 11:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:02.397 [2024-07-26 11:33:57.934555] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:26:02.397 [2024-07-26 11:33:57.934642] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:02.397 EAL: No free 2048 kB hugepages reported on node 1 00:26:02.397 [2024-07-26 11:33:58.018266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:02.656 [2024-07-26 11:33:58.138954] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:02.656 [2024-07-26 11:33:58.139011] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:02.656 [2024-07-26 11:33:58.139029] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:02.656 [2024-07-26 11:33:58.139042] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:02.656 [2024-07-26 11:33:58.139053] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:02.656 [2024-07-26 11:33:58.139149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:02.656 [2024-07-26 11:33:58.139157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.656 11:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:02.656 11:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:26:02.656 11:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:02.656 11:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:02.656 11:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:02.656 11:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:02.656 11:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2199225 00:26:02.656 11:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:02.914 [2024-07-26 11:33:58.564474] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:03.172 11:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:03.430 Malloc0 00:26:03.430 11:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:03.688 11:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:03.946 11:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:04.204 [2024-07-26 11:33:59.765966] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:04.204 11:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:04.462 [2024-07-26 11:34:00.058870] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:04.462 11:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2199509 00:26:04.462 11:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:04.462 11:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:04.462 11:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2199509 /var/tmp/bdevperf.sock 00:26:04.462 11:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 2199509 ']' 00:26:04.462 11:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:04.462 11:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:04.462 11:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:04.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
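Condensing the target-side provisioning above into a plain RPC sequence (commands taken verbatim from the trace; rpc.py here stands in for the full workspace path and talks to the target's default /var/tmp/spdk.sock inside the cvl_0_0_ns_spdk netns):

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB IO unit
    $rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -r -m 2                # -r enables ANA reporting
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # Two listeners on one address give the initiator two paths to multipath over.
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

bdevperf is then started with -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 and attaches Nvme0 to both listeners, the second time with -x multipath.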
00:26:04.462 11:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:04.462 11:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:05.029 11:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:05.029 11:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:26:05.029 11:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:05.287 11:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:26:05.852 Nvme0n1 00:26:05.852 11:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:06.418 Nvme0n1 00:26:06.418 11:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:06.418 11:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:08.318 11:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:08.318 11:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:08.576 11:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:09.140 11:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:10.074 11:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:10.074 11:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:10.074 11:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.074 11:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:10.332 11:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.332 11:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:10.332 11:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.332 11:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:10.956 11:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:10.956 11:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:10.956 11:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.956 11:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:11.214 11:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.214 11:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:11.214 11:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.214 11:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:11.781 11:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.781 11:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:11.781 11:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.781 11:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:12.346 11:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.346 11:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:12.346 11:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.346 11:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:12.604 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.604 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:12.604 11:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:13.169 11:34:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:13.427 11:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:14.801 11:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:14.801 11:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:14.801 11:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.801 11:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:14.801 11:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:14.801 11:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:14.801 11:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.801 11:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:15.368 11:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.368 11:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:15.368 11:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.368 11:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:15.626 11:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.626 11:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:15.626 11:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.626 11:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:15.885 11:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.885 11:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:15.885 11:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.885 11:34:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:16.143 11:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.143 11:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:16.143 11:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.143 11:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:16.710 11:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.710 11:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:16.710 11:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:16.969 11:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:17.536 11:34:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:18.469 11:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:18.469 11:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:18.469 11:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.469 11:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:18.727 11:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.727 11:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:18.727 11:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.727 11:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.293 11:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:19.859 11:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.859 11:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:19.859 11:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.859 11:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:20.117 11:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.117 11:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:20.117 11:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.117 11:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:20.684 11:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.684 11:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:20.684 11:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:20.684 11:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.943 11:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.943 11:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:20.943 11:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:21.201 11:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:21.766 11:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:22.700 11:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:22.700 11:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:22.700 11:34:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.700 11:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:22.958 11:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.958 11:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:22.958 11:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.958 11:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:23.216 11:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:23.216 11:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:23.216 11:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.217 11:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:23.475 11:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.475 11:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:23.475 11:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.475 11:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:24.041 11:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:24.041 11:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:24.041 11:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.041 11:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:24.608 11:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:24.608 11:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:24.608 11:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.608 11:34:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:24.867 11:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:24.867 11:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:24.867 11:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:25.127 11:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:25.693 11:34:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:27.099 11:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:27.099 11:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:27.099 11:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.099 11:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:27.099 11:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:27.099 11:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:27.099 11:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.099 11:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:27.665 11:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:27.665 11:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:27.665 11:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.665 11:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:28.232 11:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.232 11:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:28.232 11:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.232 11:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:28.490 11:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.490 11:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:28.490 11:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.490 11:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:28.748 11:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:28.748 11:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:28.748 11:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.748 11:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:29.005 11:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:29.005 11:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:29.005 11:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:29.571 11:34:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:29.829 11:34:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:30.763 11:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:30.763 11:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:30.764 11:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.764 11:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:31.330 11:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:31.330 11:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:31.330 11:34:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.330 11:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:31.588 11:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.588 11:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:31.588 11:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.588 11:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:31.845 11:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.845 11:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:31.845 11:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.845 11:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:32.411 11:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.411 11:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:32.411 11:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.411 11:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:32.669 11:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:32.669 11:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:32.669 11:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.669 11:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:32.928 11:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.928 11:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:33.186 11:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:26:33.186 11:34:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:33.752 11:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:34.011 11:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:35.385 11:34:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:35.385 11:34:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:35.385 11:34:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.385 11:34:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:35.385 11:34:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.385 11:34:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:35.385 11:34:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.385 11:34:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:35.952 11:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.952 11:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:35.952 11:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.952 11:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:36.210 11:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:36.210 11:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:36.210 11:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:36.210 11:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:36.777 11:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:36.777 11:34:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:36.777 11:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:36.777 11:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.036 11:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.036 11:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:37.036 11:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:37.036 11:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.603 11:34:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.603 11:34:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:37.603 11:34:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:37.862 11:34:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:38.428 11:34:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:39.363 11:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:39.363 11:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:39.363 11:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.363 11:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:39.621 11:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:39.621 11:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:39.621 11:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.621 11:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:39.880 11:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:39.880 11:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:39.880 11:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.880 11:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:40.446 11:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.446 11:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:40.446 11:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.446 11:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:40.705 11:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.705 11:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:40.705 11:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.705 11:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:40.963 11:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.963 11:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:40.963 11:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.963 11:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:41.223 11:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:41.223 11:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:41.223 11:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:41.823 11:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:42.081 11:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
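The cycle above is the pattern this test repeats for every ANA combination: set_ANA_state pushes a new state to each listener through the target-side rpc.py, the one-second sleep gives the bdevperf initiator time to observe the ANA change, and check_status then asserts the current/connected/accessible flags that bdev_nvme_get_io_paths reports for ports 4420 and 4421. For readers skimming the trace, a minimal reconstruction of the three helpers, inferred from the "-- #" trace lines above (the authoritative definitions live in spdk/test/nvmf/host/multipath_status.sh, the sh@59-73 references, and may differ in detail):

# Sketch only -- reconstructed from the trace, not copied from the script.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

set_ANA_state() {   # $1 = ANA state for port 4420, $2 = ANA state for port 4421
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

port_status() {     # $1 = port, $2 = io_path attribute, $3 = expected value
    local got
    got=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
    [[ $got == "$3" ]]
}

check_status() {    # expected current/connected/accessible, port 4420 then 4421
    port_status 4420 current    "$1" && port_status 4421 current    "$2" &&
    port_status 4420 connected  "$3" && port_status 4421 connected  "$4" &&
    port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
}

With both listeners just set back to non_optimized, the pass that follows is the all-true case: check_status true true true true true true.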
00:26:43.016 11:34:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:26:43.016 11:34:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:26:43.016 11:34:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:43.016 11:34:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:26:43.294 11:34:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:43.294 11:34:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:26:43.294 11:34:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:43.294 11:34:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:26:43.867 11:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:43.867 11:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:26:43.867 11:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:43.867 11:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:26:44.124 11:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:44.124 11:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:26:44.124 11:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:44.124 11:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:26:44.688 11:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:44.688 11:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:26:44.688 11:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:44.688 11:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:26:44.947 11:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:44.947 11:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:26:44.947 11:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:44.947 11:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:26:45.205 11:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:45.205 11:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:26:45.205 11:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:26:45.463 11:34:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:26:46.030 11:34:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:26:46.964 11:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:26:46.964 11:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:26:46.964 11:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:26:46.964 11:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:47.222 11:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:47.222 11:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:26:47.222 11:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:47.222 11:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:26:47.480 11:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:47.480 11:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:26:47.480 11:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:47.480 11:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:26:48.046 11:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:48.046 11:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:26:48.046 11:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:48.046 11:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:26:48.304 11:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:48.304 11:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:26:48.304 11:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:48.304 11:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:26:48.562 11:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:48.562 11:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:26:48.562 11:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:48.562 11:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:26:49.127 11:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:49.127 11:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2199509
00:26:49.127 11:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 2199509 ']'
00:26:49.127 11:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 2199509
00:26:49.127 11:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:26:49.127 11:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:26:49.127 11:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2199509
00:26:49.127 11:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:26:49.127 11:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:26:49.127 11:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2199509'
00:26:49.127 killing process with pid 2199509
00:26:49.127 11:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 2199509
00:26:49.127 11:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 2199509
00:26:49.127 Connection closed with partial response:
00:26:49.127
00:26:49.127
00:26:49.387 11:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2199509
00:26:49.387 11:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:49.387 [2024-07-26 11:34:00.127320] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization...
00:26:49.387 [2024-07-26 11:34:00.127414] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2199509 ]
00:26:49.387 EAL: No free 2048 kB hugepages reported on node 1
00:26:49.387 [2024-07-26 11:34:00.196567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:49.387 [2024-07-26 11:34:00.317871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:26:49.387 Running I/O for 90 seconds...
00:26:49.387 [2024-07-26 11:34:20.769166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:49.387 [2024-07-26 11:34:20.769237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:26:49.387 [2024-07-26 11:34:20.769310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:49.387 [2024-07-26 11:34:20.769334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:26:49.387 [2024-07-26 11:34:20.769362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:18720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:49.387 [2024-07-26 11:34:20.769380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:49.387 [2024-07-26 11:34:20.769405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:49.387 [2024-07-26 11:34:20.769424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:26:49.387 [2024-07-26 11:34:20.769457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:18736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:49.387 [2024-07-26 11:34:20.769476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:26:49.387 [2024-07-26 11:34:20.769501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:18744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:49.387 [2024-07-26 11:34:20.769518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:26:49.387 [2024-07-26 11:34:20.769543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:49.387 [2024-07-26 11:34:20.769562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:26:49.387
[2024-07-26 11:34:20.769586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:18760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.387 [2024-07-26 11:34:20.769604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:49.387 [2024-07-26 11:34:20.769629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.387 [2024-07-26 11:34:20.769647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:49.387 [2024-07-26 11:34:20.769672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.387 [2024-07-26 11:34:20.769690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:49.387 [2024-07-26 11:34:20.769714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:18784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.387 [2024-07-26 11:34:20.769749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:49.387 [2024-07-26 11:34:20.769774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:18792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.387 [2024-07-26 11:34:20.769792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:49.387 [2024-07-26 11:34:20.769815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:18800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.387 [2024-07-26 11:34:20.769832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:49.387 [2024-07-26 11:34:20.769856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:18808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.387 [2024-07-26 11:34:20.769873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:49.387 [2024-07-26 11:34:20.769897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.387 [2024-07-26 11:34:20.769915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:49.387 [2024-07-26 11:34:20.769938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.387 [2024-07-26 11:34:20.769955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:49.387 [2024-07-26 11:34:20.769979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.387 [2024-07-26 11:34:20.769997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:107 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:49.387 [2024-07-26 11:34:20.770734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:18840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.387 [2024-07-26 11:34:20.770771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:49.387 [2024-07-26 11:34:20.770802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.387 [2024-07-26 11:34:20.770822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:49.387 [2024-07-26 11:34:20.770849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.388 [2024-07-26 11:34:20.770867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:49.388 [2024-07-26 11:34:20.770892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:18864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.388 [2024-07-26 11:34:20.770910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:49.388 [2024-07-26 11:34:20.770938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:18872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.388 [2024-07-26 11:34:20.770956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:49.388 [2024-07-26 11:34:20.770981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.388 [2024-07-26 11:34:20.770998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:49.388 [2024-07-26 11:34:20.771030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:18888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.388 [2024-07-26 11:34:20.771048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:49.388 [2024-07-26 11:34:20.771074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:18896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.388 [2024-07-26 11:34:20.771091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:49.388 [2024-07-26 11:34:20.771116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:18904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.388 [2024-07-26 11:34:20.771133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:49.388 [2024-07-26 11:34:20.771158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.388 [2024-07-26 11:34:20.771176] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:49.388 [2024-07-26 11:34:20.771201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.388 [2024-07-26 11:34:20.771219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:49.388 [2024-07-26 11:34:20.771244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.388 [2024-07-26 11:34:20.771261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:49.388 [2024-07-26 11:34:20.771286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:18936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.388 [2024-07-26 11:34:20.771304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:49.388 [2024-07-26 11:34:20.771329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:18944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.388 [2024-07-26 11:34:20.771347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:49.388 [2024-07-26 11:34:20.771372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.388 [2024-07-26 11:34:20.771389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:49.388 [2024-07-26 11:34:20.771415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.388 [2024-07-26 11:34:20.771441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:49.388 [2024-07-26 11:34:20.771469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:18968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.388 [2024-07-26 11:34:20.771487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:49.388 [2024-07-26 11:34:20.771513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.388 [2024-07-26 11:34:20.771530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:49.388 [2024-07-26 11:34:20.771561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:18984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.388 [2024-07-26 11:34:20.771579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:49.388 [2024-07-26 11:34:20.771604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:18992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.388 [2024-07-26 11:34:20.771622] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:49.388 [2024-07-26 11:34:20.771648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.388 [2024-07-26 11:34:20.771665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:49.388 [2024-07-26 11:34:20.771690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.388 [2024-07-26 11:34:20.771708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:49.388 [2024-07-26 11:34:20.771732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.388 [2024-07-26 11:34:20.771750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:49.388 [2024-07-26 11:34:20.771775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.388 [2024-07-26 11:34:20.771792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:49.388 [2024-07-26 11:34:20.771817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.388 [2024-07-26 11:34:20.771834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:49.388 [2024-07-26 11:34:20.771859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.388 [2024-07-26 11:34:20.771877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:49.388 [2024-07-26 11:34:20.771902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.388 [2024-07-26 11:34:20.771919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:49.388 [2024-07-26 11:34:20.771944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.388 [2024-07-26 11:34:20.771961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:49.388 [2024-07-26 11:34:20.771987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:49.388 [2024-07-26 11:34:20.772004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:49.388 [2024-07-26 11:34:20.772029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19072 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000
00:26:49.388 [condensed: the original console output continues here with roughly one hundred more nvme_qpair NOTICE pairs of the same form — nvme_io_qpair_print_command (WRITE, plus a few READs, sqid:1 nsid:1, LBAs 19080-19704 at 11:34:20 and 93144-93504 at 11:34:41), each immediately followed by spdk_nvme_print_completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1]
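[Note on the status code: "(03/02)" is Status Code Type 3h (Path Related Status) with Status Code 02h, which SPDK prints as ASYMMETRIC ACCESS INACCESSIBLE — the controller answering these I/Os belongs to an ANA group in the Inaccessible state, as expected while the multipath test takes a path down. A quick way to gauge how widespread this was, assuming the console output has been saved to a file named build.log (hypothetical name):

    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' build.log]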
00:26:49.391 Received shutdown signal, test time was about 42.562121 seconds
00:26:49.391
00:26:49.391                                                     Latency(us)
00:26:49.391 Device Information : runtime(s)    IOPS      MiB/s    Fail/s    TO/s    Average       min         max
00:26:49.391 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:49.391 Verification LBA range: start 0x0 length 0x4000
00:26:49.391 Nvme0n1            :      42.56  7398.36    28.90     0.00     0.00   17270.83    218.45  5020737.23
00:26:49.391 ===================================================================================================================
00:26:49.391 Total              :             7398.36    28.90     0.00     0.00   17270.83    218.45  5020737.23
00:26:49.391 11:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:49.649 11:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:26:49.649 11:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:49.649 11:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
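[Sanity check on the summary above: 7398.36 IOPS at the 4096-byte IO size shown in the Job line should equal the reported MiB/s. With plain awk:

    awk 'BEGIN { printf "%.2f MiB/s\n", 7398.36 * 4096 / (1024 * 1024) }'

which prints 28.90, matching the Nvme0n1 and Total rows.]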
11:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:26:49.649 11:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:26:49.649 11:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:49.649 11:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:26:49.649 11:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:49.649 11:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:26:49.649 rmmod nvme_tcp
00:26:49.649 rmmod nvme_fabrics
00:26:49.649 rmmod nvme_keyring
00:26:49.649 11:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:26:49.649 11:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:26:49.649 11:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:26:49.649 11:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2199225 ']'
00:26:49.649 11:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2199225
00:26:49.649 11:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 2199225 ']'
00:26:49.649 11:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 2199225
00:26:49.649 11:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:26:49.649 11:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:26:49.649 11:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2199225
00:26:49.649 11:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:26:49.649 11:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:26:49.649 11:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2199225'
00:26:49.649 killing process with pid 2199225
00:26:49.649 11:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 2199225
00:26:49.649 11:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 2199225
00:26:50.216 11:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:26:50.216 11:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:26:50.216 11:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:26:50.216 11:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:26:50.216 11:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:26:50.216 11:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:50.216 11:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
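[The kill/wait sequence above is autotest_common.sh's killprocess helper doing its usual teardown. A minimal sketch of the logic visible in this trace — not the verbatim helper, and the sudo branch is simplified away:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                  # a PID is required
        kill -0 "$pid" 2>/dev/null || return 1     # bail out if already gone
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        if [ "$process_name" != "sudo" ]; then     # never signal a bare sudo wrapper
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"                            # reap it so shm/ports are freed
        fi
    }]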
00:26:50.216 11:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:52.120 11:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:26:52.120
00:26:52.120 real    0m52.459s
00:26:52.120 user    2m43.650s
00:26:52.120 sys     0m14.271s
00:26:52.120 11:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable
00:26:52.120 11:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:26:52.120 ************************************
00:26:52.120 END TEST nvmf_host_multipath_status
00:26:52.120 ************************************
00:26:52.120 11:34:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:26:52.120 11:34:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:26:52.120 11:34:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:26:52.120 11:34:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:52.120 ************************************
00:26:52.120 START TEST nvmf_discovery_remove_ifc
00:26:52.120 ************************************
00:26:52.120 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:26:52.378 * Looking for test storage...
00:26:52.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:26:52.378 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:26:52.378 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:26:52.378 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:52.378 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:52.378 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:52.378 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:52.378 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:52.378 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:52.378 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:52.378 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:52.378 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:52.378 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:52.378 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:26:52.378 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
00:26:52.378 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN"
"--hostid=$NVME_HOSTID") 00:26:52.378 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:52.378 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:52.378 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:52.378 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:52.378 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:52.378 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:52.378 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:52.378 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.378 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.378 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.378 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:52.379 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.379 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:26:52.379 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:52.379 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:52.379 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:52.379 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:52.379 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:52.379 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:52.379 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:52.379 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:52.379 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:52.379 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:52.379 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:52.379 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:52.379 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:52.379 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:52.379 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:52.379 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:52.379 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:52.379 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:52.379 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:52.379 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:52.379 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:52.379 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:52.379 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:52.379 11:34:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:52.379 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:52.379 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:26:52.379 11:34:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:54.911 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:54.911 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:26:54.911 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:54.911 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:54.911 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:54.911 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:54.911 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:54.911 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:26:54.911 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:54.911 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:26:54.911 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:26:54.911 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:26:54.911 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:26:54.911 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:26:54.911 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:26:54.911 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:54.911 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:54.911 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:54.911 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:54.911 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:54.911 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:54.911 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:54.911 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:54.911 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:54.911 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:54.911 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:54.911 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:54.911 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:54.911 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:54.912 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:54.912 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:54.912 11:34:50 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0'
00:26:54.912 Found net devices under 0000:84:00.0: cvl_0_0
00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]]
00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1'
00:26:54.912 Found net devices under 0000:84:00.1: cvl_0_1
00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes
00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
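[The probe above resolves each PCI function to its kernel net device by globbing sysfs, which can also be reproduced by hand at probe time (before cvl_0_0 is moved into a namespace below), assuming the same BDFs from this run:

    ls /sys/bus/pci/devices/0000:84:00.0/net/    # prints cvl_0_0 on this rig
    ls /sys/bus/pci/devices/0000:84:00.1/net/    # prints cvl_0_1]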
11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:54.912 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:54.912 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:26:54.912 00:26:54.912 --- 10.0.0.2 ping statistics --- 00:26:54.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.912 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:54.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:54.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:26:54.912 00:26:54.912 --- 10.0.0.1 ping statistics --- 00:26:54.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.912 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2206886 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2206886 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 2206886 ']' 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:54.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:54.912 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:54.912 [2024-07-26 11:34:50.496070] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
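waitforlisten's body is not expanded by xtrace above (only its locals, rpc_addr=/var/tmp/spdk.sock and max_retries=100, appear), but its contract is clear from context: block until the freshly forked nvmf_tgt answers on its RPC socket, or fail if the process dies first. A hedged reconstruction, assuming rpc_get_methods as the liveness probe (any cheap RPC would serve):

    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while ((max_retries-- > 0)); do
            kill -0 "$pid" || return 1    # app exited before it ever listened
            if scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0                  # socket is up and answering
            fi
            sleep 0.1
        done
        return 1                          # retries exhausted
    }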
00:26:54.912 [2024-07-26 11:34:50.496164] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:54.912 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.171 [2024-07-26 11:34:50.575533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:55.171 [2024-07-26 11:34:50.718089] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:55.171 [2024-07-26 11:34:50.718167] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:55.171 [2024-07-26 11:34:50.718188] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:55.171 [2024-07-26 11:34:50.718205] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:55.171 [2024-07-26 11:34:50.718220] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:55.171 [2024-07-26 11:34:50.718259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:55.430 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:55.430 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:26:55.430 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:55.430 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:55.430 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:55.430 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:55.430 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:55.430 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.430 11:34:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:55.430 [2024-07-26 11:34:51.008959] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:55.430 [2024-07-26 11:34:51.017191] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:55.430 null0 00:26:55.430 [2024-07-26 11:34:51.049100] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:55.430 11:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.430 11:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2206915 00:26:55.430 11:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:55.430 11:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2206915 /tmp/host.sock 00:26:55.430 11:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 2206915 ']' 00:26:55.430 11:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:26:55.430 11:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:55.430 11:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:55.430 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:55.430 11:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:55.430 11:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:55.688 [2024-07-26 11:34:51.121654] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:26:55.688 [2024-07-26 11:34:51.121757] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2206915 ] 00:26:55.688 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.688 [2024-07-26 11:34:51.191180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:55.688 [2024-07-26 11:34:51.317419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:55.947 11:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:55.947 11:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:26:55.947 11:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:55.947 11:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:55.947 11:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.947 11:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:55.947 11:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.947 11:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:55.947 11:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.947 11:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:56.205 11:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.205 11:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:56.205 11:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.205 11:34:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:57.138 [2024-07-26 11:34:52.718289] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 
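host/discovery_remove_ifc.sh@69 above is the heart of the setup: the host-side bdev layer attaches to the target's discovery service on port 8009 and auto-attaches every NVM subsystem the discovery log page advertises, with deliberately short timers so the interface flap later in the test fails over quickly. The same call spelled out against scripts/rpc.py directly (rpc_cmd is a thin wrapper around it):

    # -b nvme: base name for auto-created controllers (nvme0, then nvme1 after re-attach).
    # The three timers make loss detection fast: retry every 1 s, fail pending I/O
    # after 1 s, and give up on the controller entirely after 2 s.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach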
00:26:57.138 [2024-07-26 11:34:52.718329] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:57.138 [2024-07-26 11:34:52.718356] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:57.415 [2024-07-26 11:34:52.847770] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:57.415 [2024-07-26 11:34:52.908879] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:57.415 [2024-07-26 11:34:52.908953] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:57.415 [2024-07-26 11:34:52.909001] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:57.415 [2024-07-26 11:34:52.909030] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:57.415 [2024-07-26 11:34:52.909065] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:57.415 11:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.415 11:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:57.415 11:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:57.415 11:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:57.415 11:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:57.415 11:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.415 11:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:57.415 11:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:57.415 11:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:57.415 [2024-07-26 11:34:52.916007] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x185ce50 was disconnected and freed. delete nvme_qpair. 
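The @29/@33/@34 lines that repeat from here to the end of the test are one small poll loop, fully visible in the trace: list the host's bdevs over the RPC socket and sleep until the sorted list equals an expected value (nvme0n1 now, '' once the interface drops, nvme1n1 after re-discovery). Reconstructed from the traced pipeline:

    get_bdev_list() {
        # One sorted, space-separated line of bdev names; empty when none exist.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll once a second until the bdev list is exactly "$1".
        while [[ $(get_bdev_list) != "$1" ]]; do
            sleep 1
        done
    }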
00:26:57.415 11:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.416 11:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:57.416 11:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:57.416 11:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:57.416 11:34:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:57.416 11:34:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:57.416 11:34:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:57.416 11:34:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:57.416 11:34:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.416 11:34:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:57.416 11:34:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:57.416 11:34:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:57.416 11:34:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.416 11:34:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:57.416 11:34:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:58.803 11:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:58.803 11:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:58.803 11:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.803 11:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:58.803 11:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:58.803 11:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:58.803 11:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:58.803 11:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.803 11:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:58.803 11:34:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:59.736 11:34:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:59.736 11:34:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:59.736 11:34:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:59.736 11:34:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.736 11:34:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:59.736 11:34:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:59.736 11:34:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:59.736 11:34:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.736 11:34:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:59.736 11:34:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:00.670 11:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:00.670 11:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:00.670 11:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:00.670 11:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.670 11:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:00.670 11:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:00.670 11:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:00.670 11:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.670 11:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:00.670 11:34:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:01.603 11:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:01.603 11:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:01.603 11:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.603 11:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:01.603 11:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:01.603 11:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:01.603 11:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:01.603 11:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.603 11:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:01.603 11:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:02.977 11:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:02.977 11:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:02.977 11:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:02.977 11:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.977 11:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:02.977 11:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:02.977 11:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:02.977 11:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.977 11:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:02.977 11:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:02.977 [2024-07-26 11:34:58.349476] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:02.977 [2024-07-26 11:34:58.349558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:02.977 [2024-07-26 11:34:58.349582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.977 [2024-07-26 11:34:58.349601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:02.977 [2024-07-26 11:34:58.349616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.977 [2024-07-26 11:34:58.349631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:02.977 [2024-07-26 11:34:58.349646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.977 [2024-07-26 11:34:58.349661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:02.977 [2024-07-26 11:34:58.349676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.977 [2024-07-26 11:34:58.349691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:02.977 [2024-07-26 11:34:58.349706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.977 [2024-07-26 11:34:58.349720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1823890 is same with the state(5) to be set 00:27:02.977 [2024-07-26 11:34:58.359492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1823890 (9): Bad file descriptor 00:27:02.977 [2024-07-26 11:34:58.369540] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:03.912 11:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:03.912 11:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
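The dump above is the host noticing the dead path: the recv on the admin qpair times out (errno 110), every outstanding admin command is aborted with SQ DELETION, and the controller enters its reconnect loop. While that loop runs, the controller state can be watched from the same host socket; a hedged observability helper using the real bdev_nvme_get_controllers RPC (output shape left to jq, since field names vary across SPDK versions):

    # Print the host's NVMe controllers once a second while the reconnect
    # loop runs; Ctrl-C to stop.
    while sleep 1; do
        scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq .
    done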
00:27:03.912 11:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:03.912 11:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.912 11:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:03.912 11:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:03.912 11:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:03.912 [2024-07-26 11:34:59.429493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:03.912 [2024-07-26 11:34:59.429555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1823890 with addr=10.0.0.2, port=4420 00:27:03.912 [2024-07-26 11:34:59.429582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1823890 is same with the state(5) to be set 00:27:03.912 [2024-07-26 11:34:59.429629] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1823890 (9): Bad file descriptor 00:27:03.912 [2024-07-26 11:34:59.430116] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:03.912 [2024-07-26 11:34:59.430166] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:03.912 [2024-07-26 11:34:59.430186] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:03.912 [2024-07-26 11:34:59.430203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:03.912 [2024-07-26 11:34:59.430235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.912 [2024-07-26 11:34:59.430256] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:03.912 11:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.912 11:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:03.912 11:34:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:04.847 [2024-07-26 11:35:00.432777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:04.847 [2024-07-26 11:35:00.432839] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:04.847 [2024-07-26 11:35:00.432856] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:04.847 [2024-07-26 11:35:00.432872] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:27:04.847 [2024-07-26 11:35:00.432905] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.847 [2024-07-26 11:35:00.432957] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:04.847 [2024-07-26 11:35:00.433015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.847 [2024-07-26 11:35:00.433041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.847 [2024-07-26 11:35:00.433062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.847 [2024-07-26 11:35:00.433077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.847 [2024-07-26 11:35:00.433092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.847 [2024-07-26 11:35:00.433107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.847 [2024-07-26 11:35:00.433123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.847 [2024-07-26 11:35:00.433137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.847 [2024-07-26 11:35:00.433153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.847 [2024-07-26 11:35:00.433168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.847 [2024-07-26 11:35:00.433182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
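With ctrlr-loss-timeout-sec 2 expired, the discovery entry itself is being torn down above; the test next heals the interface (@82/@83 just below) and waits for the subsystem to come back as a fresh controller. The whole injected fault, condensed from @75/@76 earlier and the heal that follows:

    NS=cvl_0_0_ns_spdk
    # Inject: take the target's address away and down its link.
    ip netns exec "$NS" ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec "$NS" ip link set cvl_0_0 down
    wait_for_bdev ''          # host times out; nvme0n1 disappears

    # Heal: restore the address and bring the link back up.
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec "$NS" ip link set cvl_0_0 up
    wait_for_bdev nvme1n1     # discovery re-attaches as a new controller, nvme1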
00:27:04.847 [2024-07-26 11:35:00.433240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1822cf0 (9): Bad file descriptor 00:27:04.847 [2024-07-26 11:35:00.434225] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:04.847 [2024-07-26 11:35:00.434256] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:27:04.847 11:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:04.847 11:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:04.847 11:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:04.847 11:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.847 11:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:04.847 11:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:04.847 11:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:04.847 11:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.105 11:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:05.105 11:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:05.105 11:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:05.105 11:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:05.105 11:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:05.105 11:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:05.105 11:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:05.105 11:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.105 11:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:05.105 11:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:05.105 11:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:05.105 11:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.105 11:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:05.105 11:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:06.038 11:35:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:06.038 11:35:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:06.038 11:35:01 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:06.038 11:35:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.038 11:35:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:06.038 11:35:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:06.038 11:35:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:06.038 11:35:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.038 11:35:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:06.038 11:35:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:06.971 [2024-07-26 11:35:02.452890] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:06.971 [2024-07-26 11:35:02.452918] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:06.971 [2024-07-26 11:35:02.452944] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:06.971 [2024-07-26 11:35:02.540235] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:07.229 11:35:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:07.229 11:35:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:07.229 11:35:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:07.229 11:35:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.229 11:35:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:07.229 11:35:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:07.229 11:35:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:07.229 11:35:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.229 11:35:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:07.229 11:35:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:07.229 [2024-07-26 11:35:02.724810] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:07.229 [2024-07-26 11:35:02.724868] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:07.229 [2024-07-26 11:35:02.724909] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:07.229 [2024-07-26 11:35:02.724938] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:07.229 [2024-07-26 11:35:02.724953] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:07.229 [2024-07-26 11:35:02.731654] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x18667d0 was disconnected and freed. 
delete nvme_qpair. 00:27:08.165 11:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:08.165 11:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:08.165 11:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:08.165 11:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.165 11:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:08.165 11:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:08.165 11:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:08.165 11:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.165 11:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:08.165 11:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:08.165 11:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2206915 00:27:08.165 11:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 2206915 ']' 00:27:08.165 11:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 2206915 00:27:08.165 11:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:27:08.165 11:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:08.165 11:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2206915 00:27:08.165 11:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:08.165 11:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:08.165 11:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2206915' 00:27:08.165 killing process with pid 2206915 00:27:08.165 11:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 2206915 00:27:08.165 11:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 2206915 00:27:08.731 11:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:08.731 11:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:08.731 11:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:27:08.731 11:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:08.731 11:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:27:08.731 11:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:08.731 11:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:08.731 rmmod nvme_tcp 00:27:08.731 rmmod nvme_fabrics 00:27:08.731 rmmod nvme_keyring 
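killprocess, expanded at common/autotest_common.sh@950-@974 above, is deliberately more careful than a bare kill: it verifies the pid is alive, refuses to signal anything whose command name is sudo, and reaps the process so its exit is observed. A sketch consistent with the trace (the real helper carries extra platform branches behind the uname check):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1                      # must be running
        local process_name=
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [[ $process_name == sudo ]] && return 1         # never SIGTERM sudo itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                             # reap; tolerate nonzero exit
    }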
00:27:08.731 11:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:08.731 11:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:27:08.731 11:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:27:08.731 11:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2206886 ']' 00:27:08.731 11:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2206886 00:27:08.731 11:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 2206886 ']' 00:27:08.731 11:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 2206886 00:27:08.731 11:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:27:08.731 11:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:08.731 11:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2206886 00:27:08.731 11:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:08.731 11:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:08.731 11:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2206886' 00:27:08.731 killing process with pid 2206886 00:27:08.731 11:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 2206886 00:27:08.731 11:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 2206886 00:27:08.991 11:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:08.991 11:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:08.991 11:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:08.991 11:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:08.991 11:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:08.991 11:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:08.991 11:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:08.991 11:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.528 11:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:11.528 00:27:11.528 real 0m18.847s 00:27:11.528 user 0m27.137s 00:27:11.528 sys 0m3.599s 00:27:11.528 11:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:11.528 11:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:11.528 ************************************ 00:27:11.528 END TEST nvmf_discovery_remove_ifc 00:27:11.528 ************************************ 00:27:11.528 11:35:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:11.528 11:35:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:11.528 11:35:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:11.528 11:35:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.528 ************************************ 00:27:11.528 START TEST nvmf_identify_kernel_target 00:27:11.528 ************************************ 00:27:11.528 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:11.528 * Looking for test storage... 00:27:11.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:11.528 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:11.529 11:35:06 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:27:11.529 11:35:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:13.431 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:13.431 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:27:13.431 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:13.431 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:13.431 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:13.431 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:13.431 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:13.431 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:27:13.431 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:13.431 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:27:13.431 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:27:13.431 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:27:13.431 
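The e810/x722/mlx arrays being declared above are filled just below from pci_bus_cache, an associative map from "vendor:device" IDs to PCI addresses; on this rig only the Intel E810 0x159b entries (0000:84:00.0 and .1) survive into pci_devs. pci_bus_cache itself is built elsewhere in common.sh and never traced here; one plausible construction from lspci, labeled a pure assumption:

    declare -A pci_bus_cache
    # Hypothetical fill: index every PCI function by "0xVVVV:0xDDDD" -> BDF list.
    while read -r bdf _class id _rest; do
        pci_bus_cache["0x${id%:*}:0x${id#*:}"]+="$bdf "
    done < <(lspci -Dn)
    # Then, as in the trace:  e810+=(${pci_bus_cache["$intel:0x159b"]})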
11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:27:13.431 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:27:13.431 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:27:13.431 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:13.431 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:13.431 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:13.431 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:13.431 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:13.431 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:13.431 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:13.431 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:13.431 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:13.431 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:13.431 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:13.431 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:13.431 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:13.431 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:13.431 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:13.431 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:13.431 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:13.431 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:13.431 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:13.431 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:13.431 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:13.431 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:13.431 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:13.431 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:13.431 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:13.431 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:27:13.431 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:13.431 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:13.431 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:13.432 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:13.690 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:13.690 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:13.690 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:13.690 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:13.690 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:13.690 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:13.690 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:13.690 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:13.690 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:13.690 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:13.691 Found net devices under 0000:84:00.0: cvl_0_0 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:13.691 Found net devices under 0000:84:00.1: cvl_0_1 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:13.691 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:13.691 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:27:13.691 00:27:13.691 --- 10.0.0.2 ping statistics --- 00:27:13.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.691 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:13.691 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:13.691 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:27:13.691 00:27:13.691 --- 10.0.0.1 ping statistics --- 00:27:13.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.691 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target 
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:13.691 11:35:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:15.068 Waiting for block devices as requested 00:27:15.068 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:27:15.333 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:15.333 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:15.333 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:15.633 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:15.633 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:15.633 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:15.633 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:15.895 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:15.895 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:15.895 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:15.895 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:16.153 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:16.153 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:16.153 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:16.153 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:16.411 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:16.411 11:35:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:16.411 11:35:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:16.411 11:35:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:16.411 11:35:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:16.411 11:35:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:16.411 11:35:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:16.411 11:35:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:16.411 11:35:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 
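
Note: the trace above has just entered configure_kernel_target. nvmet is loaded, setup.sh reset hands the NVMe drive at 0000:82:00.0 back to the kernel driver, and each /sys/block/nvme* device is screened: zoned namespaces are skipped, and spdk-gpt.py/blkid probe for an existing partition table, so the "No valid GPT data, bailing" on the lines that follow is the expected all-clear for a blank disk. The trace then assembles the kernel target through nvmet's configfs tree. Below is a condensed, hedged sketch of that assembly using this run's values; the attribute file names (attr_allow_any_host, device_path, enable, addr_*) are the standard nvmet configfs entries and are inferred here, since the trace records only the bare echo values:

    #!/usr/bin/env bash
    # Hedged sketch of the configfs assembly traced below (nvmf/common.sh@658-@677).
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    modprobe nvmet && modprobe nvmet-tcp    # assumption: tcp transport module loaded explicitly

    mkdir "$subsys"              # configfs auto-populates namespaces/ and attr_* beneath it
    mkdir "$subsys/namespaces/1"
    mkdir "$nvmet/ports/1"

    echo 1            > "$subsys/attr_allow_any_host"      # the trace also writes a model string, presumably to attr_model
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"

    echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
    echo tcp      > "$nvmet/ports/1/addr_trtype"
    echo 4420     > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4     > "$nvmet/ports/1/addr_adrfam"

    # Linking the subsystem under the port is what exposes it on 10.0.0.1:4420.
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"

Once the link exists, the kernel discovery controller reports two records (the discovery subsystem itself plus nqn.2016-06.io.spdk:testnqn), which is exactly what the nvme discover output below shows.
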
00:27:16.411 11:35:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:16.411 No valid GPT data, bailing 00:27:16.411 11:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:16.411 11:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:27:16.411 11:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:27:16.411 11:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:16.411 11:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:16.411 11:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:16.411 11:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:16.411 11:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:16.411 11:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:16.411 11:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:27:16.411 11:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:16.411 11:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:27:16.411 11:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:16.411 11:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:27:16.411 11:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:27:16.411 11:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:27:16.412 11:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:16.670 11:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:27:16.670 00:27:16.670 Discovery Log Number of Records 2, Generation counter 2 00:27:16.670 =====Discovery Log Entry 0====== 00:27:16.670 trtype: tcp 00:27:16.670 adrfam: ipv4 00:27:16.670 subtype: current discovery subsystem 00:27:16.670 treq: not specified, sq flow control disable supported 00:27:16.670 portid: 1 00:27:16.670 trsvcid: 4420 00:27:16.670 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:16.670 traddr: 10.0.0.1 00:27:16.670 eflags: none 00:27:16.670 sectype: none 00:27:16.670 =====Discovery Log Entry 1====== 00:27:16.670 trtype: tcp 00:27:16.670 adrfam: ipv4 00:27:16.670 subtype: nvme subsystem 00:27:16.670 treq: not specified, sq flow control disable supported 00:27:16.670 portid: 1 00:27:16.670 trsvcid: 4420 00:27:16.670 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:16.670 traddr: 10.0.0.1 00:27:16.670 eflags: none 00:27:16.670 sectype: none 00:27:16.670 11:35:12 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:16.670 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:16.670 EAL: No free 2048 kB hugepages reported on node 1 00:27:16.670 ===================================================== 00:27:16.670 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:16.670 ===================================================== 00:27:16.670 Controller Capabilities/Features 00:27:16.670 ================================ 00:27:16.670 Vendor ID: 0000 00:27:16.670 Subsystem Vendor ID: 0000 00:27:16.670 Serial Number: aca28487d62f46218e6e 00:27:16.670 Model Number: Linux 00:27:16.670 Firmware Version: 6.7.0-68 00:27:16.670 Recommended Arb Burst: 0 00:27:16.670 IEEE OUI Identifier: 00 00 00 00:27:16.670 Multi-path I/O 00:27:16.670 May have multiple subsystem ports: No 00:27:16.670 May have multiple controllers: No 00:27:16.670 Associated with SR-IOV VF: No 00:27:16.670 Max Data Transfer Size: Unlimited 00:27:16.670 Max Number of Namespaces: 0 00:27:16.670 Max Number of I/O Queues: 1024 00:27:16.670 NVMe Specification Version (VS): 1.3 00:27:16.670 NVMe Specification Version (Identify): 1.3 00:27:16.670 Maximum Queue Entries: 1024 00:27:16.670 Contiguous Queues Required: No 00:27:16.670 Arbitration Mechanisms Supported 00:27:16.670 Weighted Round Robin: Not Supported 00:27:16.670 Vendor Specific: Not Supported 00:27:16.671 Reset Timeout: 7500 ms 00:27:16.671 Doorbell Stride: 4 bytes 00:27:16.671 NVM Subsystem Reset: Not Supported 00:27:16.671 Command Sets Supported 00:27:16.671 NVM Command Set: Supported 00:27:16.671 Boot Partition: Not Supported 00:27:16.671 Memory Page Size Minimum: 4096 bytes 00:27:16.671 Memory Page Size Maximum: 4096 bytes 00:27:16.671 Persistent Memory Region: Not Supported 00:27:16.671 Optional Asynchronous Events Supported 00:27:16.671 Namespace Attribute Notices: Not Supported 00:27:16.671 Firmware Activation Notices: Not Supported 00:27:16.671 ANA Change Notices: Not Supported 00:27:16.671 PLE Aggregate Log Change Notices: Not Supported 00:27:16.671 LBA Status Info Alert Notices: Not Supported 00:27:16.671 EGE Aggregate Log Change Notices: Not Supported 00:27:16.671 Normal NVM Subsystem Shutdown event: Not Supported 00:27:16.671 Zone Descriptor Change Notices: Not Supported 00:27:16.671 Discovery Log Change Notices: Supported 00:27:16.671 Controller Attributes 00:27:16.671 128-bit Host Identifier: Not Supported 00:27:16.671 Non-Operational Permissive Mode: Not Supported 00:27:16.671 NVM Sets: Not Supported 00:27:16.671 Read Recovery Levels: Not Supported 00:27:16.671 Endurance Groups: Not Supported 00:27:16.671 Predictable Latency Mode: Not Supported 00:27:16.671 Traffic Based Keep Alive: Not Supported 00:27:16.671 Namespace Granularity: Not Supported 00:27:16.671 SQ Associations: Not Supported 00:27:16.671 UUID List: Not Supported 00:27:16.671 Multi-Domain Subsystem: Not Supported 00:27:16.671 Fixed Capacity Management: Not Supported 00:27:16.671 Variable Capacity Management: Not Supported 00:27:16.671 Delete Endurance Group: Not Supported 00:27:16.671 Delete NVM Set: Not Supported 00:27:16.671 Extended LBA Formats Supported: Not Supported 00:27:16.671 Flexible Data Placement Supported: Not Supported 00:27:16.671 00:27:16.671 Controller Memory Buffer Support 00:27:16.671 ================================ 00:27:16.671 Supported: No
00:27:16.671 00:27:16.671 Persistent Memory Region Support 00:27:16.671 ================================ 00:27:16.671 Supported: No 00:27:16.671 00:27:16.671 Admin Command Set Attributes 00:27:16.671 ============================ 00:27:16.671 Security Send/Receive: Not Supported 00:27:16.671 Format NVM: Not Supported 00:27:16.671 Firmware Activate/Download: Not Supported 00:27:16.671 Namespace Management: Not Supported 00:27:16.671 Device Self-Test: Not Supported 00:27:16.671 Directives: Not Supported 00:27:16.671 NVMe-MI: Not Supported 00:27:16.671 Virtualization Management: Not Supported 00:27:16.671 Doorbell Buffer Config: Not Supported 00:27:16.671 Get LBA Status Capability: Not Supported 00:27:16.671 Command & Feature Lockdown Capability: Not Supported 00:27:16.671 Abort Command Limit: 1 00:27:16.671 Async Event Request Limit: 1 00:27:16.671 Number of Firmware Slots: N/A 00:27:16.671 Firmware Slot 1 Read-Only: N/A 00:27:16.671 Firmware Activation Without Reset: N/A 00:27:16.671 Multiple Update Detection Support: N/A 00:27:16.671 Firmware Update Granularity: No Information Provided 00:27:16.671 Per-Namespace SMART Log: No 00:27:16.671 Asymmetric Namespace Access Log Page: Not Supported 00:27:16.671 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:16.671 Command Effects Log Page: Not Supported 00:27:16.671 Get Log Page Extended Data: Supported 00:27:16.671 Telemetry Log Pages: Not Supported 00:27:16.671 Persistent Event Log Pages: Not Supported 00:27:16.671 Supported Log Pages Log Page: May Support 00:27:16.671 Commands Supported & Effects Log Page: Not Supported 00:27:16.671 Feature Identifiers & Effects Log Page: May Support 00:27:16.671 NVMe-MI Commands & Effects Log Page: May Support 00:27:16.671 Data Area 4 for Telemetry Log: Not Supported 00:27:16.671 Error Log Page Entries Supported: 1 00:27:16.671 Keep Alive: Not Supported 00:27:16.671 00:27:16.671 NVM Command Set Attributes 00:27:16.671 ========================== 00:27:16.671 Submission Queue Entry Size 00:27:16.671 Max: 1 00:27:16.671 Min: 1 00:27:16.671 Completion Queue Entry Size 00:27:16.671 Max: 1 00:27:16.671 Min: 1 00:27:16.671 Number of Namespaces: 0 00:27:16.671 Compare Command: Not Supported 00:27:16.671 Write Uncorrectable Command: Not Supported 00:27:16.671 Dataset Management Command: Not Supported 00:27:16.671 Write Zeroes Command: Not Supported 00:27:16.671 Set Features Save Field: Not Supported 00:27:16.671 Reservations: Not Supported 00:27:16.671 Timestamp: Not Supported 00:27:16.671 Copy: Not Supported 00:27:16.671 Volatile Write Cache: Not Present 00:27:16.671 Atomic Write Unit (Normal): 1 00:27:16.671 Atomic Write Unit (PFail): 1 00:27:16.671 Atomic Compare & Write Unit: 1 00:27:16.671 Fused Compare & Write: Not Supported 00:27:16.671 Scatter-Gather List 00:27:16.671 SGL Command Set: Supported 00:27:16.671 SGL Keyed: Not Supported 00:27:16.671 SGL Bit Bucket Descriptor: Not Supported 00:27:16.671 SGL Metadata Pointer: Not Supported 00:27:16.671 Oversized SGL: Not Supported 00:27:16.671 SGL Metadata Address: Not Supported 00:27:16.671 SGL Offset: Supported 00:27:16.671 Transport SGL Data Block: Not Supported 00:27:16.671 Replay Protected Memory Block: Not Supported 00:27:16.671 00:27:16.671 Firmware Slot Information 00:27:16.671 ========================= 00:27:16.671 Active slot: 0 00:27:16.671 00:27:16.671 00:27:16.671 Error Log 00:27:16.671 ========= 00:27:16.671 00:27:16.671 Active Namespaces 00:27:16.671 ================= 00:27:16.671 Discovery Log Page 00:27:16.671 ================== 00:27:16.671
Generation Counter: 2 00:27:16.671 Number of Records: 2 00:27:16.671 Record Format: 0 00:27:16.671 00:27:16.671 Discovery Log Entry 0 00:27:16.671 ---------------------- 00:27:16.671 Transport Type: 3 (TCP) 00:27:16.671 Address Family: 1 (IPv4) 00:27:16.671 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:16.671 Entry Flags: 00:27:16.671 Duplicate Returned Information: 0 00:27:16.671 Explicit Persistent Connection Support for Discovery: 0 00:27:16.671 Transport Requirements: 00:27:16.671 Secure Channel: Not Specified 00:27:16.671 Port ID: 1 (0x0001) 00:27:16.671 Controller ID: 65535 (0xffff) 00:27:16.671 Admin Max SQ Size: 32 00:27:16.671 Transport Service Identifier: 4420 00:27:16.671 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:16.671 Transport Address: 10.0.0.1 00:27:16.671 Discovery Log Entry 1 00:27:16.671 ---------------------- 00:27:16.671 Transport Type: 3 (TCP) 00:27:16.671 Address Family: 1 (IPv4) 00:27:16.671 Subsystem Type: 2 (NVM Subsystem) 00:27:16.671 Entry Flags: 00:27:16.671 Duplicate Returned Information: 0 00:27:16.671 Explicit Persistent Connection Support for Discovery: 0 00:27:16.671 Transport Requirements: 00:27:16.671 Secure Channel: Not Specified 00:27:16.671 Port ID: 1 (0x0001) 00:27:16.671 Controller ID: 65535 (0xffff) 00:27:16.671 Admin Max SQ Size: 32 00:27:16.671 Transport Service Identifier: 4420 00:27:16.671 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:16.671 Transport Address: 10.0.0.1 00:27:16.671 11:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:16.930 EAL: No free 2048 kB hugepages reported on node 1 00:27:16.930 get_feature(0x01) failed 00:27:16.930 get_feature(0x02) failed 00:27:16.930 get_feature(0x04) failed 00:27:16.930 ===================================================== 00:27:16.930 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:16.930 ===================================================== 00:27:16.930 Controller Capabilities/Features 00:27:16.930 ================================ 00:27:16.930 Vendor ID: 0000 00:27:16.930 Subsystem Vendor ID: 0000 00:27:16.930 Serial Number: 2ad560990458ba3e7aaf 00:27:16.930 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:16.930 Firmware Version: 6.7.0-68 00:27:16.930 Recommended Arb Burst: 6 00:27:16.930 IEEE OUI Identifier: 00 00 00 00:27:16.930 Multi-path I/O 00:27:16.930 May have multiple subsystem ports: Yes 00:27:16.930 May have multiple controllers: Yes 00:27:16.930 Associated with SR-IOV VF: No 00:27:16.930 Max Data Transfer Size: Unlimited 00:27:16.930 Max Number of Namespaces: 1024 00:27:16.930 Max Number of I/O Queues: 128 00:27:16.930 NVMe Specification Version (VS): 1.3 00:27:16.930 NVMe Specification Version (Identify): 1.3 00:27:16.930 Maximum Queue Entries: 1024 00:27:16.930 Contiguous Queues Required: No 00:27:16.930 Arbitration Mechanisms Supported 00:27:16.930 Weighted Round Robin: Not Supported 00:27:16.930 Vendor Specific: Not Supported 00:27:16.930 Reset Timeout: 7500 ms 00:27:16.930 Doorbell Stride: 4 bytes 00:27:16.930 NVM Subsystem Reset: Not Supported 00:27:16.930 Command Sets Supported 00:27:16.930 NVM Command Set: Supported 00:27:16.930 Boot Partition: Not Supported 00:27:16.930 Memory Page Size Minimum: 4096 bytes 00:27:16.930 Memory Page Size Maximum: 4096 bytes 00:27:16.930 
Persistent Memory Region: Not Supported 00:27:16.930 Optional Asynchronous Events Supported 00:27:16.930 Namespace Attribute Notices: Supported 00:27:16.930 Firmware Activation Notices: Not Supported 00:27:16.930 ANA Change Notices: Supported 00:27:16.930 PLE Aggregate Log Change Notices: Not Supported 00:27:16.930 LBA Status Info Alert Notices: Not Supported 00:27:16.930 EGE Aggregate Log Change Notices: Not Supported 00:27:16.930 Normal NVM Subsystem Shutdown event: Not Supported 00:27:16.930 Zone Descriptor Change Notices: Not Supported 00:27:16.930 Discovery Log Change Notices: Not Supported 00:27:16.930 Controller Attributes 00:27:16.930 128-bit Host Identifier: Supported 00:27:16.930 Non-Operational Permissive Mode: Not Supported 00:27:16.930 NVM Sets: Not Supported 00:27:16.930 Read Recovery Levels: Not Supported 00:27:16.930 Endurance Groups: Not Supported 00:27:16.930 Predictable Latency Mode: Not Supported 00:27:16.930 Traffic Based Keep Alive: Supported 00:27:16.930 Namespace Granularity: Not Supported 00:27:16.930 SQ Associations: Not Supported 00:27:16.930 UUID List: Not Supported 00:27:16.930 Multi-Domain Subsystem: Not Supported 00:27:16.930 Fixed Capacity Management: Not Supported 00:27:16.930 Variable Capacity Management: Not Supported 00:27:16.930 Delete Endurance Group: Not Supported 00:27:16.930 Delete NVM Set: Not Supported 00:27:16.930 Extended LBA Formats Supported: Not Supported 00:27:16.930 Flexible Data Placement Supported: Not Supported 00:27:16.930 00:27:16.930 Controller Memory Buffer Support 00:27:16.930 ================================ 00:27:16.930 Supported: No 00:27:16.930 00:27:16.930 Persistent Memory Region Support 00:27:16.930 ================================ 00:27:16.930 Supported: No 00:27:16.930 00:27:16.930 Admin Command Set Attributes 00:27:16.930 ============================ 00:27:16.930 Security Send/Receive: Not Supported 00:27:16.930 Format NVM: Not Supported 00:27:16.930 Firmware Activate/Download: Not Supported 00:27:16.930 Namespace Management: Not Supported 00:27:16.930 Device Self-Test: Not Supported 00:27:16.930 Directives: Not Supported 00:27:16.930 NVMe-MI: Not Supported 00:27:16.930 Virtualization Management: Not Supported 00:27:16.930 Doorbell Buffer Config: Not Supported 00:27:16.930 Get LBA Status Capability: Not Supported 00:27:16.930 Command & Feature Lockdown Capability: Not Supported 00:27:16.930 Abort Command Limit: 4 00:27:16.930 Async Event Request Limit: 4 00:27:16.930 Number of Firmware Slots: N/A 00:27:16.930 Firmware Slot 1 Read-Only: N/A 00:27:16.930 Firmware Activation Without Reset: N/A 00:27:16.930 Multiple Update Detection Support: N/A 00:27:16.930 Firmware Update Granularity: No Information Provided 00:27:16.931 Per-Namespace SMART Log: Yes 00:27:16.931 Asymmetric Namespace Access Log Page: Supported 00:27:16.931 ANA Transition Time : 10 sec 00:27:16.931 00:27:16.931 Asymmetric Namespace Access Capabilities 00:27:16.931 ANA Optimized State : Supported 00:27:16.931 ANA Non-Optimized State : Supported 00:27:16.931 ANA Inaccessible State : Supported 00:27:16.931 ANA Persistent Loss State : Supported 00:27:16.931 ANA Change State : Supported 00:27:16.931 ANAGRPID is not changed : No 00:27:16.931 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:16.931 00:27:16.931 ANA Group Identifier Maximum : 128 00:27:16.931 Number of ANA Group Identifiers : 128 00:27:16.931 Max Number of Allowed Namespaces : 1024 00:27:16.931 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:16.931 Command Effects Log Page: Supported
00:27:16.931 Get Log Page Extended Data: Supported 00:27:16.931 Telemetry Log Pages: Not Supported 00:27:16.931 Persistent Event Log Pages: Not Supported 00:27:16.931 Supported Log Pages Log Page: May Support 00:27:16.931 Commands Supported & Effects Log Page: Not Supported 00:27:16.931 Feature Identifiers & Effects Log Page: May Support 00:27:16.931 NVMe-MI Commands & Effects Log Page: May Support 00:27:16.931 Data Area 4 for Telemetry Log: Not Supported 00:27:16.931 Error Log Page Entries Supported: 128 00:27:16.931 Keep Alive: Supported 00:27:16.931 Keep Alive Granularity: 1000 ms 00:27:16.931 00:27:16.931 NVM Command Set Attributes 00:27:16.931 ========================== 00:27:16.931 Submission Queue Entry Size 00:27:16.931 Max: 64 00:27:16.931 Min: 64 00:27:16.931 Completion Queue Entry Size 00:27:16.931 Max: 16 00:27:16.931 Min: 16 00:27:16.931 Number of Namespaces: 1024 00:27:16.931 Compare Command: Not Supported 00:27:16.931 Write Uncorrectable Command: Not Supported 00:27:16.931 Dataset Management Command: Supported 00:27:16.931 Write Zeroes Command: Supported 00:27:16.931 Set Features Save Field: Not Supported 00:27:16.931 Reservations: Not Supported 00:27:16.931 Timestamp: Not Supported 00:27:16.931 Copy: Not Supported 00:27:16.931 Volatile Write Cache: Present 00:27:16.931 Atomic Write Unit (Normal): 1 00:27:16.931 Atomic Write Unit (PFail): 1 00:27:16.931 Atomic Compare & Write Unit: 1 00:27:16.931 Fused Compare & Write: Not Supported 00:27:16.931 Scatter-Gather List 00:27:16.931 SGL Command Set: Supported 00:27:16.931 SGL Keyed: Not Supported 00:27:16.931 SGL Bit Bucket Descriptor: Not Supported 00:27:16.931 SGL Metadata Pointer: Not Supported 00:27:16.931 Oversized SGL: Not Supported 00:27:16.931 SGL Metadata Address: Not Supported 00:27:16.931 SGL Offset: Supported 00:27:16.931 Transport SGL Data Block: Not Supported 00:27:16.931 Replay Protected Memory Block: Not Supported 00:27:16.931 00:27:16.931 Firmware Slot Information 00:27:16.931 ========================= 00:27:16.931 Active slot: 0 00:27:16.931 00:27:16.931 Asymmetric Namespace Access 00:27:16.931 =========================== 00:27:16.931 Change Count : 0 00:27:16.931 Number of ANA Group Descriptors : 1 00:27:16.931 ANA Group Descriptor : 0 00:27:16.931 ANA Group ID : 1 00:27:16.931 Number of NSID Values : 1 00:27:16.931 Change Count : 0 00:27:16.931 ANA State : 1 00:27:16.931 Namespace Identifier : 1 00:27:16.931 00:27:16.931 Commands Supported and Effects 00:27:16.931 ============================== 00:27:16.931 Admin Commands 00:27:16.931 -------------- 00:27:16.931 Get Log Page (02h): Supported 00:27:16.931 Identify (06h): Supported 00:27:16.931 Abort (08h): Supported 00:27:16.931 Set Features (09h): Supported 00:27:16.931 Get Features (0Ah): Supported 00:27:16.931 Asynchronous Event Request (0Ch): Supported 00:27:16.931 Keep Alive (18h): Supported 00:27:16.931 I/O Commands 00:27:16.931 ------------ 00:27:16.931 Flush (00h): Supported 00:27:16.931 Write (01h): Supported LBA-Change 00:27:16.931 Read (02h): Supported 00:27:16.931 Write Zeroes (08h): Supported LBA-Change 00:27:16.931 Dataset Management (09h): Supported 00:27:16.931 00:27:16.931 Error Log 00:27:16.931 ========= 00:27:16.931 Entry: 0 00:27:16.931 Error Count: 0x3 00:27:16.931 Submission Queue Id: 0x0 00:27:16.931 Command Id: 0x5 00:27:16.931 Phase Bit: 0 00:27:16.931 Status Code: 0x2 00:27:16.931 Status Code Type: 0x0 00:27:16.931 Do Not Retry: 1 00:27:16.931 Error Location: 0x28 00:27:16.931 LBA: 0x0 00:27:16.931 Namespace: 0x0 00:27:16.931 Vendor Log
Page: 0x0 00:27:16.931 ----------- 00:27:16.931 Entry: 1 00:27:16.931 Error Count: 0x2 00:27:16.931 Submission Queue Id: 0x0 00:27:16.931 Command Id: 0x5 00:27:16.931 Phase Bit: 0 00:27:16.931 Status Code: 0x2 00:27:16.931 Status Code Type: 0x0 00:27:16.931 Do Not Retry: 1 00:27:16.931 Error Location: 0x28 00:27:16.931 LBA: 0x0 00:27:16.931 Namespace: 0x0 00:27:16.931 Vendor Log Page: 0x0 00:27:16.931 ----------- 00:27:16.931 Entry: 2 00:27:16.931 Error Count: 0x1 00:27:16.931 Submission Queue Id: 0x0 00:27:16.931 Command Id: 0x4 00:27:16.931 Phase Bit: 0 00:27:16.931 Status Code: 0x2 00:27:16.931 Status Code Type: 0x0 00:27:16.931 Do Not Retry: 1 00:27:16.931 Error Location: 0x28 00:27:16.931 LBA: 0x0 00:27:16.931 Namespace: 0x0 00:27:16.931 Vendor Log Page: 0x0 00:27:16.931 00:27:16.931 Number of Queues 00:27:16.931 ================ 00:27:16.931 Number of I/O Submission Queues: 128 00:27:16.931 Number of I/O Completion Queues: 128 00:27:16.931 00:27:16.931 ZNS Specific Controller Data 00:27:16.931 ============================ 00:27:16.931 Zone Append Size Limit: 0 00:27:16.931 00:27:16.931 00:27:16.931 Active Namespaces 00:27:16.931 ================= 00:27:16.931 get_feature(0x05) failed 00:27:16.931 Namespace ID:1 00:27:16.931 Command Set Identifier: NVM (00h) 00:27:16.931 Deallocate: Supported 00:27:16.931 Deallocated/Unwritten Error: Not Supported 00:27:16.931 Deallocated Read Value: Unknown 00:27:16.931 Deallocate in Write Zeroes: Not Supported 00:27:16.931 Deallocated Guard Field: 0xFFFF 00:27:16.931 Flush: Supported 00:27:16.931 Reservation: Not Supported 00:27:16.931 Namespace Sharing Capabilities: Multiple Controllers 00:27:16.931 Size (in LBAs): 1953525168 (931GiB) 00:27:16.931 Capacity (in LBAs): 1953525168 (931GiB) 00:27:16.931 Utilization (in LBAs): 1953525168 (931GiB) 00:27:16.931 UUID: 974a2919-b405-4cd6-91c3-2839c10d1479 00:27:16.931 Thin Provisioning: Not Supported 00:27:16.931 Per-NS Atomic Units: Yes 00:27:16.931 Atomic Boundary Size (Normal): 0 00:27:16.931 Atomic Boundary Size (PFail): 0 00:27:16.931 Atomic Boundary Offset: 0 00:27:16.931 NGUID/EUI64 Never Reused: No 00:27:16.931 ANA group ID: 1 00:27:16.931 Namespace Write Protected: No 00:27:16.931 Number of LBA Formats: 1 00:27:16.931 Current LBA Format: LBA Format #00 00:27:16.931 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:16.931 00:27:16.931 11:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:16.931 11:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:16.931 11:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:27:16.931 11:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:16.931 11:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:27:16.931 11:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:16.931 11:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:16.932 rmmod nvme_tcp 00:27:16.932 rmmod nvme_fabrics 00:27:16.932 11:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:16.932 11:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:27:16.932 11:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:27:16.932 11:35:12 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:16.932 11:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:16.932 11:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:16.932 11:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:16.932 11:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:16.932 11:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:16.932 11:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:16.932 11:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:16.932 11:35:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.839 11:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:18.839 11:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:18.839 11:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:18.839 11:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:27:19.098 11:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:19.098 11:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:19.099 11:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:19.099 11:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:19.099 11:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:19.099 11:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:19.099 11:35:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:20.476 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:20.476 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:20.476 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:20.476 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:20.476 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:20.476 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:20.476 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:20.476 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:20.476 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:20.476 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:20.476 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:20.476 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:20.476 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:20.476 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:20.476 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:27:20.476 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:21.411 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:27:21.670 00:27:21.670 real 0m10.452s 00:27:21.670 user 0m2.218s 00:27:21.670 sys 0m4.233s 00:27:21.670 11:35:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:21.670 11:35:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:21.670 ************************************ 00:27:21.670 END TEST nvmf_identify_kernel_target 00:27:21.670 ************************************ 00:27:21.670 11:35:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:21.670 11:35:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:21.670 11:35:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:21.670 11:35:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.670 ************************************ 00:27:21.670 START TEST nvmf_auth_host 00:27:21.670 ************************************ 00:27:21.670 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:21.670 * Looking for test storage... 00:27:21.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:21.670 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:21.670 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:21.670 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:21.670 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:21.670 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:21.670 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:21.670 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:21.670 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:21.670 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:21.670 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:21.670 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:21.670 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:21.670 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:21.670 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:21.670 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:21.670 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:21.670 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:21.670 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:21.670 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:21.670 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:21.670 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:21.670 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:21.670 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.670 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.670 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.670 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:21.670 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.670 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:27:21.670 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
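
Note: auth.sh fixes the host's NVMe identity once, up front: nvme gen-hostnqn emits a UUID-based NQN, and the hostid is the UUID portion of that string; both are passed to every later discover/connect via the NVME_HOST array, which is how the target tells this initiator apart in the auth flows the test exercises. A minimal sketch of the pattern at nvmf/common.sh@17-19 follows; the parameter-expansion derivation is one plausible reading, since only the resulting values appear in the trace:

    # Hedged sketch of the host-identity setup (nvmf/common.sh@17-19).
    NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-...
    NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}  # keep only the UUID after ":uuid:"
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

    # Typical later use:
    #   nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.1 -s 4420
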
00:27:21.670 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:21.670 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:21.670 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:21.670 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:21.670 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:21.670 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:21.671 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:21.671 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:21.671 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:21.671 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:21.671 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:21.671 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:21.671 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:21.671 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:21.671 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:21.671 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:21.671 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:21.671 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:21.671 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:21.671 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:21.671 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:21.671 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:21.671 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:21.671 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:21.671 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:21.671 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:21.671 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:27:21.671 11:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.204 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:24.204 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:27:24.204 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:24.204 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:24.204 11:35:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:24.204 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:24.204 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:24.204 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:27:24.204 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:24.204 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:27:24.204 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:27:24.204 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:27:24.204 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:27:24.204 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:27:24.204 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:27:24.204 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:24.204 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:24.204 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:24.204 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:24.204 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:24.204 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:24.204 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:24.204 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:24.204 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:24.204 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:24.204 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:24.204 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:24.204 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:24.204 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:24.204 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:24.204 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:24.204 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:24.204 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:24.204 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:24.204 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:24.204 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:24.204 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
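
Note: this device-gathering preamble repeats for every test in the suite: NICs are bucketed by PCI vendor:device ID (Intel E810 ice parts 0x1592/0x159b, X722 0x37d2, and a list of Mellanox ConnectX parts), and the e810 pool wins on this rig, matching the two 0x159b functions at 0000:84:00.0/.1 reported by the scan that continues below. A standalone approximation of the classification, with the suite's pci_bus_cache swapped for a direct lspci scan (illustrative only):

    # Hedged sketch: bucket Ethernet-class PCI functions by vendor:device ID,
    # roughly what gather_supported_nvmf_pci_devs derives from pci_bus_cache.
    declare -a e810=() x722=() mlx=()
    while read -r addr id; do
        case $id in
            8086:1592|8086:159b) e810+=("$addr") ;;  # Intel E810 (ice)
            8086:37d2)           x722+=("$addr") ;;  # Intel X722 (i40e)
            15b3:*)              mlx+=("$addr")  ;;  # Mellanox ConnectX family
        esac
    done < <(lspci -Dn -d '::0200' | awk '{print $1, $3}')
    echo "e810: ${e810[*]:-none}"
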
00:27:24.204 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:24.204 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:24.204 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:24.205 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:24.205 Found net devices under 0000:84:00.0: cvl_0_0 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:24.205 Found net devices under 0000:84:00.1: cvl_0_1 00:27:24.205 11:35:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:24.205 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:24.462 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:24.462 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:24.462 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:24.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:24.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:27:24.462 00:27:24.462 --- 10.0.0.2 ping statistics --- 00:27:24.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.462 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:27:24.462 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:24.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:24.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:27:24.462 00:27:24.462 --- 10.0.0.1 ping statistics --- 00:27:24.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.462 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:27:24.462 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:24.462 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:27:24.462 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:24.462 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:24.462 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:24.462 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:24.462 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:24.462 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:24.462 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:24.462 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:24.462 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:24.462 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:24.463 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.463 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2214279 00:27:24.463 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:24.463 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2214279 00:27:24.463 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 2214279 ']' 00:27:24.463 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:24.463 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:24.463 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
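Condensed, the nvmf_tcp_init and nvmfappstart steps above build a two-host topology out of the two E810 ports and then start the target inside the private namespace: cvl_0_0 (10.0.0.2) becomes the target side, cvl_0_1 (10.0.0.1) stays in the root namespace as the initiator, and an iptables rule plus two pings validate the 4420/tcp path before nvmf_tgt comes up. The same sequence extracted from the trace, with a plain polling loop standing in for the harness's waitforlisten helper:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                      # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move target port inside
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # and back again

    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
    nvmfpid=$!
    until ./scripts/rpc.py rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" || exit 1                  # bail if the target died
        sleep 0.5
    done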
00:27:24.463 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:24.463 11:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.720 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:24.720 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:27:24.720 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:24.720 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:24.720 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.720 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:24.720 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:24.720 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:24.720 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:24.720 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:24.720 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:24.720 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:24.720 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:24.720 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:24.720 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=76b399df4626665d8c6d3a780cb7463b 00:27:24.720 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:24.720 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.VbP 00:27:24.720 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 76b399df4626665d8c6d3a780cb7463b 0 00:27:24.720 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 76b399df4626665d8c6d3a780cb7463b 0 00:27:24.720 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:24.720 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:24.720 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=76b399df4626665d8c6d3a780cb7463b 00:27:24.720 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:24.720 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.VbP 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.VbP 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.VbP 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:24.977 11:35:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1170bc95643901bac10cf1e019f6347b3e31313095d61abc390920da9e1d1965 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Q6I 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1170bc95643901bac10cf1e019f6347b3e31313095d61abc390920da9e1d1965 3 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1170bc95643901bac10cf1e019f6347b3e31313095d61abc390920da9e1d1965 3 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1170bc95643901bac10cf1e019f6347b3e31313095d61abc390920da9e1d1965 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Q6I 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Q6I 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Q6I 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d4a2fdd106ad42244a26247c628637553fe360e3298e6653 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.XrY 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d4a2fdd106ad42244a26247c628637553fe360e3298e6653 0 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d4a2fdd106ad42244a26247c628637553fe360e3298e6653 0 
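Every gen_dhchap_key call above follows one recipe: read len/2 random bytes, hex-encode them with xxd, and have the python helper wrap that hex string as an NVMe DH-HMAC-CHAP secret of the form DHHC-1:<digest-id>:<base64>:, where the base64 payload is the ASCII hex text followed by its 4-byte CRC32. That is why the d4a2fdd1... value generated above resurfaces later in this trace as DHHC-1:00:ZDRhMmZk...: once it is used as key1. A self-contained sketch of that helper (the little-endian CRC suffix is assumed, since the trace records only inputs and outputs):

    # Sketch of gen_dhchap_key/format_key; digest ids follow the digests map
    # above: 0=null, 1=sha256, 2=sha384, 3=sha512.
    gen_dhchap_key_sketch() {
        local digest=$1 len=$2 key
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters
        python3 -c 'import base64, sys, zlib; k = sys.argv[1].encode(); c = zlib.crc32(k).to_bytes(4, "little"); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + c).decode()))' "$key" "$digest"
    }
    # e.g. gen_dhchap_key_sketch 0 32 prints DHHC-1:00:<48 base64 chars>: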
00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d4a2fdd106ad42244a26247c628637553fe360e3298e6653 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.XrY 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.XrY 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.XrY 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=19d95621477502ace426372bafdab87d113457978243f5f0 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.XgS 00:27:24.977 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 19d95621477502ace426372bafdab87d113457978243f5f0 2 00:27:24.978 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 19d95621477502ace426372bafdab87d113457978243f5f0 2 00:27:24.978 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:24.978 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:24.978 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=19d95621477502ace426372bafdab87d113457978243f5f0 00:27:24.978 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:24.978 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:24.978 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.XgS 00:27:24.978 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.XgS 00:27:24.978 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.XgS 00:27:24.978 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:24.978 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:24.978 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:24.978 11:35:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:24.978 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:24.978 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:24.978 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:24.978 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b873ae2201075ae3552a0a21c02d7820 00:27:24.978 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:24.978 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.M7x 00:27:24.978 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b873ae2201075ae3552a0a21c02d7820 1 00:27:24.978 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b873ae2201075ae3552a0a21c02d7820 1 00:27:24.978 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:24.978 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:24.978 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b873ae2201075ae3552a0a21c02d7820 00:27:24.978 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:24.978 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:25.235 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.M7x 00:27:25.235 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.M7x 00:27:25.235 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.M7x 00:27:25.235 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:25.235 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:25.235 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:25.235 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:25.235 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:25.235 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:25.235 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:25.235 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6dbd64e81f78d42afcb51290aff90be9 00:27:25.235 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:25.235 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.0Rn 00:27:25.235 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6dbd64e81f78d42afcb51290aff90be9 1 00:27:25.235 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6dbd64e81f78d42afcb51290aff90be9 1 00:27:25.235 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:25.235 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:25.235 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=6dbd64e81f78d42afcb51290aff90be9 00:27:25.235 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:25.235 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:25.235 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.0Rn 00:27:25.235 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.0Rn 00:27:25.235 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.0Rn 00:27:25.235 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:25.235 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:25.235 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:25.235 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:25.235 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:25.235 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:25.235 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:25.235 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=07a88435463398a050dc5764dfdd4fd876495cf023ddd5a7 00:27:25.235 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:25.235 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ifR 00:27:25.235 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 07a88435463398a050dc5764dfdd4fd876495cf023ddd5a7 2 00:27:25.235 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 07a88435463398a050dc5764dfdd4fd876495cf023ddd5a7 2 00:27:25.235 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=07a88435463398a050dc5764dfdd4fd876495cf023ddd5a7 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ifR 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ifR 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.ifR 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:25.236 11:35:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a8f767d46f7b97fd4324075ad3c17ebb 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.zxI 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a8f767d46f7b97fd4324075ad3c17ebb 0 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a8f767d46f7b97fd4324075ad3c17ebb 0 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a8f767d46f7b97fd4324075ad3c17ebb 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.zxI 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.zxI 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.zxI 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cc14b1abca0edc4c151b1917c19c68929b042d7a3973ca2603918b4aa6660d68 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.xIr 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cc14b1abca0edc4c151b1917c19c68929b042d7a3973ca2603918b4aa6660d68 3 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cc14b1abca0edc4c151b1917c19c68929b042d7a3973ca2603918b4aa6660d68 3 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cc14b1abca0edc4c151b1917c19c68929b042d7a3973ca2603918b4aa6660d68 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.xIr 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.xIr 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.xIr 00:27:25.236 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:25.493 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2214279 00:27:25.493 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 2214279 ']' 00:27:25.493 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:25.493 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:25.493 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:25.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:25.493 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:25.493 11:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.VbP 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Q6I ]] 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Q6I 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.XrY 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.XgS ]] 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.XgS 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.M7x 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.0Rn ]] 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.0Rn 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.ifR 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.zxI ]] 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.zxI 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.xIr 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:25.751 11:35:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:25.751 11:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:27.123 Waiting for block devices as requested 00:27:27.123 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:27:27.380 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:27.380 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:27.637 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:27.637 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:27.637 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:27.637 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:27.893 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:27.893 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:27.893 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:27.893 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:28.150 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:28.150 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:28.150 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:28.150 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:28.407 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:28.407 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:28.971 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:28.971 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:28.971 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:28.971 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:28.971 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:28.971 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:28.971 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:28.971 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:28.971 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:28.971 No valid GPT data, bailing 00:27:28.971 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:28.971 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:27:28.971 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:27:28.971 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:28.971 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:28.971 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:28.971 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:28.971 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:28.971 11:35:24 
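configure_kernel_target drives the kernel nvmet stack entirely through configfs: the three mkdir calls above create the subsystem, its namespace 1 (backed by the freshly reclaimed /dev/nvme0n1), and listener port 1, and the echo calls that follow in the trace fill in their attributes. The attribute file names below are assumed from the standard nvmet configfs layout, since the trace records only the values written:

    modprobe nvmet
    sub=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$sub" "$sub/namespaces/1" "$port"
    echo "SPDK-nqn.2024-02.io.spdk:cnode0" > "$sub/attr_model"
    echo 1 > "$sub/attr_allow_any_host"       # relaxed now, tightened later
    echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
    echo 1 > "$sub/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp      > "$port/addr_trtype"
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"          # expose the subsystem on the port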
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:28.971 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:27:28.971 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:28.971 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:27:28.971 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:28.971 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:27:28.971 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:27:28.971 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:27:28.971 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:28.971 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:27:29.229 00:27:29.229 Discovery Log Number of Records 2, Generation counter 2 00:27:29.229 =====Discovery Log Entry 0====== 00:27:29.229 trtype: tcp 00:27:29.229 adrfam: ipv4 00:27:29.229 subtype: current discovery subsystem 00:27:29.229 treq: not specified, sq flow control disable supported 00:27:29.229 portid: 1 00:27:29.229 trsvcid: 4420 00:27:29.229 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:29.229 traddr: 10.0.0.1 00:27:29.229 eflags: none 00:27:29.229 sectype: none 00:27:29.229 =====Discovery Log Entry 1====== 00:27:29.229 trtype: tcp 00:27:29.229 adrfam: ipv4 00:27:29.229 subtype: nvme subsystem 00:27:29.229 treq: not specified, sq flow control disable supported 00:27:29.229 portid: 1 00:27:29.229 trsvcid: 4420 00:27:29.229 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:29.229 traddr: 10.0.0.1 00:27:29.229 eflags: none 00:27:29.229 sectype: none 00:27:29.229 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:29.229 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:29.229 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:29.229 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:29.229 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.229 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:29.229 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:29.229 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:29.229 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDRhMmZkZDEwNmFkNDIyNDRhMjYyNDdjNjI4NjM3NTUzZmUzNjBlMzI5OGU2NjUz7SbmPQ==: 00:27:29.229 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: 00:27:29.229 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:29.229 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:27:29.229 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDRhMmZkZDEwNmFkNDIyNDRhMjYyNDdjNjI4NjM3NTUzZmUzNjBlMzI5OGU2NjUz7SbmPQ==: 00:27:29.230 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: ]] 00:27:29.230 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: 00:27:29.230 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:29.230 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:29.230 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:29.230 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:29.230 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:29.230 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.230 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:29.230 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:29.230 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:29.230 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.230 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:29.230 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.230 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.230 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.230 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.230 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.230 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.230 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.230 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.230 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.230 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.230 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.230 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.230 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.230 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.230 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:29.230 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.230 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.230 nvme0n1 00:27:29.230 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.230 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.230 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.230 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.230 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.230 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzZiMzk5ZGY0NjI2NjY1ZDhjNmQzYTc4MGNiNzQ2M2ICW9Hx: 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzZiMzk5ZGY0NjI2NjY1ZDhjNmQzYTc4MGNiNzQ2M2ICW9Hx: 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: ]] 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
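From this point the test settles into its main loop over every digest x dhgroup x keyid combination: nvmet_auth_set_key rewrites the kernel host's DH-HMAC-CHAP hash, DH group and secrets through configfs, then connect_authenticate pins the SPDK initiator to that same single combination and performs an attach/verify/detach round trip. One iteration, with the RPC parameters exactly as they appear in the trace (rpc_cmd is a thin wrapper around scripts/rpc.py):

    # Pin the initiator to the combination under test, then authenticate.
    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0   # keyring names from above
    # Confirm the controller authenticated and attached, then tear it down.
    [[ $(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0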
00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.488 11:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.488 nvme0n1 00:27:29.488 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.488 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.488 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.488 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.488 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.488 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.778 11:35:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDRhMmZkZDEwNmFkNDIyNDRhMjYyNDdjNjI4NjM3NTUzZmUzNjBlMzI5OGU2NjUz7SbmPQ==: 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDRhMmZkZDEwNmFkNDIyNDRhMjYyNDdjNjI4NjM3NTUzZmUzNjBlMzI5OGU2NjUz7SbmPQ==: 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: ]] 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.778 nvme0n1 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.778 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjg3M2FlMjIwMTA3NWFlMzU1MmEwYTIxYzAyZDc4MjCl/VfJ: 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:Yjg3M2FlMjIwMTA3NWFlMzU1MmEwYTIxYzAyZDc4MjCl/VfJ: 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: ]] 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.037 nvme0n1 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.037 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDdhODg0MzU0NjMzOThhMDUwZGM1NzY0ZGZkZDRmZDg3NjQ5NWNmMDIzZGRkNWE3ldf/oA==: 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDdhODg0MzU0NjMzOThhMDUwZGM1NzY0ZGZkZDRmZDg3NjQ5NWNmMDIzZGRkNWE3ldf/oA==: 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: ]] 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.296 nvme0n1 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.296 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.555 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.555 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.555 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:30.555 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.555 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:30.555 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:30.555 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:30.555 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MxNGIxYWJjYTBlZGM0YzE1MWIxOTE3YzE5YzY4OTI5YjA0MmQ3YTM5NzNjYTI2MDM5MThiNGFhNjY2MGQ2OCNTbU0=: 
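[editor's note] The DHHC-1 strings echoed above are NVMe DH-HMAC-CHAP secrets in the spec's textual representation, DHHC-1:<t>:<base64 secret+CRC>:, where <t> selects the optional secret transform and length (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512). The helper being traced, nvmet_auth_set_key (host/auth.sh@42-51), is not reproduced in this log; a minimal sketch of what its echoes plausibly do, assuming the standard Linux kernel nvmet configfs layout and the host NQN used throughout this run:

  # Sketch only: configfs paths are an assumption, not shown in this log.
  nvmet_auth_set_key() {
      local digest dhgroup keyid key ckey
      digest=$1 dhgroup=$2 keyid=$3
      key=${keys[keyid]} ckey=${ckeys[keyid]}
      local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

      echo "hmac(${digest})" > "${host}/dhchap_hash"    # auth.sh@48
      echo "${dhgroup}" > "${host}/dhchap_dhgroup"      # auth.sh@49
      echo "${key}" > "${host}/dhchap_key"              # auth.sh@50
      # Controller (bidirectional) key is optional; keyid 4 carries none.
      [[ -z ${ckey} ]] || echo "${ckey}" > "${host}/dhchap_ctrl_key"  # auth.sh@51
  }

Note that keyid 4 has ckey='' (visible in the trace that follows), so the bidirectional branch at auth.sh@51 is skipped and the attach below is issued without --dhchap-ctrlr-key.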
00:27:30.555 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:30.555 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:30.555 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:30.555 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MxNGIxYWJjYTBlZGM0YzE1MWIxOTE3YzE5YzY4OTI5YjA0MmQ3YTM5NzNjYTI2MDM5MThiNGFhNjY2MGQ2OCNTbU0=: 00:27:30.555 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:30.555 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:30.555 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.555 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:30.555 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:30.555 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:30.555 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.555 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:30.555 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.555 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.555 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.555 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.555 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.555 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.555 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.555 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.555 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.555 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.555 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.555 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.555 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.555 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.555 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:30.555 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.555 11:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.555 nvme0n1 00:27:30.555 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.555 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.555 11:35:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.555 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.555 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.555 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.555 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.555 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.555 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.555 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.555 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.555 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:30.555 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.555 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:30.555 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.555 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:30.555 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:30.555 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:30.555 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzZiMzk5ZGY0NjI2NjY1ZDhjNmQzYTc4MGNiNzQ2M2ICW9Hx: 00:27:30.555 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: 00:27:30.555 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:30.555 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:30.555 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzZiMzk5ZGY0NjI2NjY1ZDhjNmQzYTc4MGNiNzQ2M2ICW9Hx: 00:27:30.555 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: ]] 00:27:30.555 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: 00:27:30.555 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:30.555 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.555 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:30.555 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:30.555 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:30.555 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.555 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:30.555 
11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.555 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.813 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.813 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.813 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.813 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.813 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.813 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.813 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.813 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.813 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.813 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.814 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.814 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.814 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:30.814 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.814 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.814 nvme0n1 00:27:30.814 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.814 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.814 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.814 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.814 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.814 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.814 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.814 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.814 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.814 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.072 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.072 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.072 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:31.072 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.072 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:27:31.072 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:31.072 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:31.072 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDRhMmZkZDEwNmFkNDIyNDRhMjYyNDdjNjI4NjM3NTUzZmUzNjBlMzI5OGU2NjUz7SbmPQ==: 00:27:31.072 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: 00:27:31.072 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.072 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:31.072 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDRhMmZkZDEwNmFkNDIyNDRhMjYyNDdjNjI4NjM3NTUzZmUzNjBlMzI5OGU2NjUz7SbmPQ==: 00:27:31.072 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: ]] 00:27:31.072 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: 00:27:31.072 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:31.073 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.073 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:31.073 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:31.073 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:31.073 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.073 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:31.073 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.073 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.073 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.073 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.073 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.073 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.073 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.073 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.073 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.073 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.073 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.073 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.073 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.073 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.073 11:35:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:31.073 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.073 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.073 nvme0n1 00:27:31.073 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.073 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.073 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.073 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.073 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.073 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjg3M2FlMjIwMTA3NWFlMzU1MmEwYTIxYzAyZDc4MjCl/VfJ: 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjg3M2FlMjIwMTA3NWFlMzU1MmEwYTIxYzAyZDc4MjCl/VfJ: 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: ]] 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:31.373 11:35:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.373 nvme0n1 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.373 11:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.631 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.631 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.631 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.631 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.631 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.631 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.631 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:31.631 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.631 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.631 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:31.631 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:31.631 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDdhODg0MzU0NjMzOThhMDUwZGM1NzY0ZGZkZDRmZDg3NjQ5NWNmMDIzZGRkNWE3ldf/oA==: 00:27:31.631 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: 00:27:31.631 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.631 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:31.631 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDdhODg0MzU0NjMzOThhMDUwZGM1NzY0ZGZkZDRmZDg3NjQ5NWNmMDIzZGRkNWE3ldf/oA==: 00:27:31.631 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: ]] 00:27:31.631 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: 00:27:31.632 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:31.632 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.632 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:31.632 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:31.632 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:31.632 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.632 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:31.632 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.632 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.632 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.632 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.632 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.632 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.632 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.632 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.632 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.632 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.632 11:35:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.632 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.632 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.632 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.632 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:31.632 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.632 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.632 nvme0n1 00:27:31.632 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.632 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.632 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.632 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.632 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.632 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.632 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.632 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.632 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.632 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.890 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.890 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.890 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:31.890 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.890 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.890 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:31.890 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:31.890 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MxNGIxYWJjYTBlZGM0YzE1MWIxOTE3YzE5YzY4OTI5YjA0MmQ3YTM5NzNjYTI2MDM5MThiNGFhNjY2MGQ2OCNTbU0=: 00:27:31.890 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:31.890 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.890 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:31.890 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MxNGIxYWJjYTBlZGM0YzE1MWIxOTE3YzE5YzY4OTI5YjA0MmQ3YTM5NzNjYTI2MDM5MThiNGFhNjY2MGQ2OCNTbU0=: 00:27:31.890 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:31.890 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 
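[editor's note] Each connect_authenticate pass above reduces to two RPC calls against the SPDK host side. The commands below are lifted verbatim from the trace (auth.sh@60-61) for the keyid-3/ffdhe3072 pass just completed; only the ./scripts/rpc.py entry point is an assumption (the log drives them through the rpc_cmd wrapper), and key3/ckey3 are names of keyring entries registered earlier in the test, outside this excerpt:

  # Restrict the initiator to a single digest/dhgroup, then attach with key 3.
  ./scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3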
00:27:31.890 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.890 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:31.890 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:31.890 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:31.890 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.890 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:31.890 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.890 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.890 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.890 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.890 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.890 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.890 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.890 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.890 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.890 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.890 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.890 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.891 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.891 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.891 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:31.891 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.891 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.891 nvme0n1 00:27:31.891 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.891 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.891 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.891 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.891 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.891 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.891 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.891 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.891 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.891 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.149 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.149 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:32.149 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.149 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:32.149 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.149 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:32.149 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:32.149 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:32.149 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzZiMzk5ZGY0NjI2NjY1ZDhjNmQzYTc4MGNiNzQ2M2ICW9Hx: 00:27:32.149 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: 00:27:32.149 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:32.149 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:32.149 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzZiMzk5ZGY0NjI2NjY1ZDhjNmQzYTc4MGNiNzQ2M2ICW9Hx: 00:27:32.149 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: ]] 00:27:32.149 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: 00:27:32.149 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:32.149 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.149 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:32.149 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:32.149 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:32.149 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.149 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:32.149 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.149 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.149 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.149 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.149 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.149 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.149 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:27:32.149 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.149 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.149 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.149 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.149 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.149 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.149 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.149 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:32.149 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.149 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.407 nvme0n1 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDRhMmZkZDEwNmFkNDIyNDRhMjYyNDdjNjI4NjM3NTUzZmUzNjBlMzI5OGU2NjUz7SbmPQ==: 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:32.407 11:35:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDRhMmZkZDEwNmFkNDIyNDRhMjYyNDdjNjI4NjM3NTUzZmUzNjBlMzI5OGU2NjUz7SbmPQ==: 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: ]] 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.407 11:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.974 nvme0n1 00:27:32.974 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.974 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:27:32.974 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.974 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.974 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.974 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.974 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.974 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.974 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.974 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.974 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.974 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.974 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:32.974 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.974 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:32.974 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:32.974 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:32.974 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjg3M2FlMjIwMTA3NWFlMzU1MmEwYTIxYzAyZDc4MjCl/VfJ: 00:27:32.974 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: 00:27:32.974 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:32.974 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:32.974 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjg3M2FlMjIwMTA3NWFlMzU1MmEwYTIxYzAyZDc4MjCl/VfJ: 00:27:32.974 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: ]] 00:27:32.974 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: 00:27:32.974 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:32.974 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.974 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:32.974 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:32.974 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:32.974 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.974 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:32.974 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.974 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
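[editor's note] After every attach, the test verifies that authentication actually succeeded rather than merely not erroring: an attach only yields a controller once DH-HMAC-CHAP completes, so auth.sh@64-65 list the controllers, compare the returned name against nvme0, and detach before the next iteration. The equivalent, with the rpc.py entry point again assumed:

  # Auth succeeded iff the controller exists under the requested name.
  name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ ${name} == "nvme0" ]]   # a non-zero status here fails the test run
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0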
00:27:32.975 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.975 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.975 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.975 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.975 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.975 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.975 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.975 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.975 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.975 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.975 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.975 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.975 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:32.975 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.975 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.233 nvme0n1 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
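Worth noting in the trace is the host/auth.sh@58 line, ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}): the optional bidirectional-auth argument is built as a bash array via the :+ alternate-value expansion, so it contributes zero words when no controller key exists for that key id. That is why the keyid=4 attaches later in this run pass only --dhchap-key key4, with no --dhchap-ctrlr-key at all. A standalone illustration of the idiom, with hypothetical values rather than the test's keys:

  ckeys=([1]="ckey1" [4]="")   # key id 4 has no controller key configured
  for keyid in 1 4; do
      extra=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      # expands to two words for keyid=1 and to nothing for keyid=4
      echo attach "${extra[@]}"
  done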
00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDdhODg0MzU0NjMzOThhMDUwZGM1NzY0ZGZkZDRmZDg3NjQ5NWNmMDIzZGRkNWE3ldf/oA==: 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDdhODg0MzU0NjMzOThhMDUwZGM1NzY0ZGZkZDRmZDg3NjQ5NWNmMDIzZGRkNWE3ldf/oA==: 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: ]] 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:33.233 11:35:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.233 11:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.491 nvme0n1 00:27:33.491 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.491 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.491 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.491 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.491 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.491 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.491 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.491 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.491 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.491 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.491 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.491 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.492 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:33.492 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.492 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:33.492 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:33.492 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:33.492 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MxNGIxYWJjYTBlZGM0YzE1MWIxOTE3YzE5YzY4OTI5YjA0MmQ3YTM5NzNjYTI2MDM5MThiNGFhNjY2MGQ2OCNTbU0=: 00:27:33.492 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:33.492 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:33.492 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:33.492 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MxNGIxYWJjYTBlZGM0YzE1MWIxOTE3YzE5YzY4OTI5YjA0MmQ3YTM5NzNjYTI2MDM5MThiNGFhNjY2MGQ2OCNTbU0=: 00:27:33.492 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:33.492 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:33.492 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.492 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:33.492 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:33.492 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:33.492 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.492 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:27:33.492 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.492 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.750 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.750 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.750 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:33.750 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.750 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.750 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.750 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.750 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:33.750 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.750 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.750 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.750 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.750 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:33.750 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.750 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.007 nvme0n1 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 
-- # local digest dhgroup keyid key ckey 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzZiMzk5ZGY0NjI2NjY1ZDhjNmQzYTc4MGNiNzQ2M2ICW9Hx: 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzZiMzk5ZGY0NjI2NjY1ZDhjNmQzYTc4MGNiNzQ2M2ICW9Hx: 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: ]] 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 
]] 00:27:34.007 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.008 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:34.008 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.008 11:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.573 nvme0n1 00:27:34.573 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.573 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.573 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.573 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.573 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.573 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.573 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.573 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.573 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.573 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.573 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.573 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.573 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:34.573 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.573 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.573 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:34.573 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:34.573 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDRhMmZkZDEwNmFkNDIyNDRhMjYyNDdjNjI4NjM3NTUzZmUzNjBlMzI5OGU2NjUz7SbmPQ==: 00:27:34.573 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: 00:27:34.573 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:34.573 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:34.573 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDRhMmZkZDEwNmFkNDIyNDRhMjYyNDdjNjI4NjM3NTUzZmUzNjBlMzI5OGU2NjUz7SbmPQ==: 00:27:34.573 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: ]] 00:27:34.573 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: 00:27:34.573 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 
1 00:27:34.573 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.573 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:34.573 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:34.573 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:34.573 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.573 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:34.573 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.573 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.831 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.831 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.831 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.831 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.831 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.831 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.831 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.832 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.832 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.832 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.832 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.832 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.832 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:34.832 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.832 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.398 nvme0n1 00:27:35.398 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.398 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.398 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.398 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.398 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.398 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.398 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.398 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.398 11:35:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.398 11:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.398 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.398 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.398 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:35.398 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.398 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:35.398 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:35.398 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:35.398 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjg3M2FlMjIwMTA3NWFlMzU1MmEwYTIxYzAyZDc4MjCl/VfJ: 00:27:35.398 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: 00:27:35.398 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:35.398 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:35.398 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjg3M2FlMjIwMTA3NWFlMzU1MmEwYTIxYzAyZDc4MjCl/VfJ: 00:27:35.399 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: ]] 00:27:35.399 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: 00:27:35.399 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:35.399 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.399 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:35.399 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:35.399 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:35.399 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.399 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:35.399 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.399 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.399 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.399 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.399 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:35.399 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.399 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:35.399 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.399 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.399 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.399 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.399 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.399 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.399 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:35.399 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:35.399 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.399 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.334 nvme0n1 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDdhODg0MzU0NjMzOThhMDUwZGM1NzY0ZGZkZDRmZDg3NjQ5NWNmMDIzZGRkNWE3ldf/oA==: 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDdhODg0MzU0NjMzOThhMDUwZGM1NzY0ZGZkZDRmZDg3NjQ5NWNmMDIzZGRkNWE3ldf/oA==: 00:27:36.334 
11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: ]] 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.334 11:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.900 nvme0n1 00:27:36.900 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.900 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.900 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.900 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.900 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.900 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.900 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.900 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.900 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.900 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.158 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.158 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.158 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:37.158 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.158 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:37.158 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:37.158 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:37.158 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MxNGIxYWJjYTBlZGM0YzE1MWIxOTE3YzE5YzY4OTI5YjA0MmQ3YTM5NzNjYTI2MDM5MThiNGFhNjY2MGQ2OCNTbU0=: 00:27:37.158 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:37.158 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:37.158 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:37.158 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MxNGIxYWJjYTBlZGM0YzE1MWIxOTE3YzE5YzY4OTI5YjA0MmQ3YTM5NzNjYTI2MDM5MThiNGFhNjY2MGQ2OCNTbU0=: 00:27:37.158 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:37.158 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:37.158 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.158 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:37.158 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:37.158 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:37.159 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.159 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:37.159 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.159 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.159 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.159 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.159 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:37.159 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.159 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.159 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.159 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.159 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:37.159 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.159 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.159 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.159 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.159 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:37.159 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.159 11:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.725 nvme0n1 00:27:37.725 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.725 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.725 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.725 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.725 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.725 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.725 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.725 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.725 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.725 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.725 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.725 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:37.725 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.725 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:37.725 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.725 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:37.725 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:37.725 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:37.725 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzZiMzk5ZGY0NjI2NjY1ZDhjNmQzYTc4MGNiNzQ2M2ICW9Hx: 00:27:37.725 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: 00:27:37.725 11:35:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:37.725 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:37.725 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzZiMzk5ZGY0NjI2NjY1ZDhjNmQzYTc4MGNiNzQ2M2ICW9Hx: 00:27:37.725 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: ]] 00:27:37.725 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: 00:27:37.725 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:37.725 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.725 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:37.725 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:37.725 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:37.725 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.725 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:37.725 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.725 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.725 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.725 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.725 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:37.725 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.726 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.726 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.726 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.726 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:37.726 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.726 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.726 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.726 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.726 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:37.726 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.726 11:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.098 nvme0n1 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDRhMmZkZDEwNmFkNDIyNDRhMjYyNDdjNjI4NjM3NTUzZmUzNjBlMzI5OGU2NjUz7SbmPQ==: 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDRhMmZkZDEwNmFkNDIyNDRhMjYyNDdjNjI4NjM3NTUzZmUzNjBlMzI5OGU2NjUz7SbmPQ==: 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: ]] 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.098 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.099 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:39.099 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.099 11:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.032 nvme0n1 00:27:40.032 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.032 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.032 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.032 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.032 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.032 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.291 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.291 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.291 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.291 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.291 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.291 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.291 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:40.291 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.291 11:35:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:40.291 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:40.291 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:40.291 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjg3M2FlMjIwMTA3NWFlMzU1MmEwYTIxYzAyZDc4MjCl/VfJ: 00:27:40.291 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: 00:27:40.291 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:40.291 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:40.291 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjg3M2FlMjIwMTA3NWFlMzU1MmEwYTIxYzAyZDc4MjCl/VfJ: 00:27:40.291 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: ]] 00:27:40.291 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: 00:27:40.291 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:40.291 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.291 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:40.291 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:40.291 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:40.291 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.291 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:40.291 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.291 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.291 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.291 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.291 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:40.291 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.291 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.291 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.291 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.291 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.291 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.291 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.291 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.291 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.291 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:40.291 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.291 11:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.226 nvme0n1 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDdhODg0MzU0NjMzOThhMDUwZGM1NzY0ZGZkZDRmZDg3NjQ5NWNmMDIzZGRkNWE3ldf/oA==: 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDdhODg0MzU0NjMzOThhMDUwZGM1NzY0ZGZkZDRmZDg3NjQ5NWNmMDIzZGRkNWE3ldf/oA==: 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: ]] 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:41.226 11:35:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.226 11:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.600 nvme0n1 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MxNGIxYWJjYTBlZGM0YzE1MWIxOTE3YzE5YzY4OTI5YjA0MmQ3YTM5NzNjYTI2MDM5MThiNGFhNjY2MGQ2OCNTbU0=: 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MxNGIxYWJjYTBlZGM0YzE1MWIxOTE3YzE5YzY4OTI5YjA0MmQ3YTM5NzNjYTI2MDM5MThiNGFhNjY2MGQ2OCNTbU0=: 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.600 11:35:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.600 11:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.534 nvme0n1 00:27:43.534 11:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.534 11:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.534 11:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.534 11:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.534 11:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.534 11:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.534 11:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.534 11:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.534 11:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.534 11:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzZiMzk5ZGY0NjI2NjY1ZDhjNmQzYTc4MGNiNzQ2M2ICW9Hx: 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzZiMzk5ZGY0NjI2NjY1ZDhjNmQzYTc4MGNiNzQ2M2ICW9Hx: 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: ]] 00:27:43.534 
11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.534 nvme0n1 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.534 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDRhMmZkZDEwNmFkNDIyNDRhMjYyNDdjNjI4NjM3NTUzZmUzNjBlMzI5OGU2NjUz7SbmPQ==: 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDRhMmZkZDEwNmFkNDIyNDRhMjYyNDdjNjI4NjM3NTUzZmUzNjBlMzI5OGU2NjUz7SbmPQ==: 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: ]] 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.793 nvme0n1 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.793 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjg3M2FlMjIwMTA3NWFlMzU1MmEwYTIxYzAyZDc4MjCl/VfJ: 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjg3M2FlMjIwMTA3NWFlMzU1MmEwYTIxYzAyZDc4MjCl/VfJ: 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: ]] 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.052 nvme0n1 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.052 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.310 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.310 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.310 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:44.310 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.310 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:44.310 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:44.310 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:44.310 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDdhODg0MzU0NjMzOThhMDUwZGM1NzY0ZGZkZDRmZDg3NjQ5NWNmMDIzZGRkNWE3ldf/oA==: 00:27:44.310 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: 00:27:44.310 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.310 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:44.310 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDdhODg0MzU0NjMzOThhMDUwZGM1NzY0ZGZkZDRmZDg3NjQ5NWNmMDIzZGRkNWE3ldf/oA==: 00:27:44.310 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: ]] 00:27:44.310 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: 00:27:44.310 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:44.310 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.310 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.310 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:44.310 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:44.310 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.310 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:44.310 11:35:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.310 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.310 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.310 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.310 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:44.310 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:44.310 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:44.310 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.310 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.311 nvme0n1 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 
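At this point the trace has finished the sha256 sweep (ending with the ffdhe8192 group and key slots 0 through 4) and is working through sha384 with ffdhe2048. Every iteration has the same shape: nvmet_auth_set_key programs the kernel target's per-host authentication attributes, bdev_nvme_set_options pins the host's allowed DH-HMAC-CHAP digest and DH group, bdev_nvme_attach_controller connects with the key under test, bdev_nvme_get_controllers piped through jq confirms that nvme0 came up, and bdev_nvme_detach_controller tears it down. A minimal sketch of one iteration follows. The configfs attribute paths are an assumption (xtrace does not print the redirections behind the echo lines in nvmet_auth_set_key, so they are inferred from the kernel nvmet host attributes), rpc.py stands in for the rpc_cmd wrapper seen in the trace, the key values are elided placeholders, and key0..key4 / ckey0..ckey3 are assumed to have been registered with SPDK's keyring earlier in the script.

    # One iteration of host/auth.sh, reduced to its essentials (sketch only).
    # Placeholder values; the real keys appear in the trace above, and the
    # configfs paths are inferred, since xtrace hides the redirections.
    digest=sha384 dhgroup=ffdhe2048 keyid=0
    keys=("DHHC-1:00:elided:") ckeys=("DHHC-1:03:elided:")
    hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    # Target side, nvmet_auth_set_key: hash, DH group, key, optional ctrl key.
    echo "hmac(${digest})" > "${hostdir}/dhchap_hash"
    echo "${dhgroup}" > "${hostdir}/dhchap_dhgroup"
    echo "${keys[keyid]}" > "${hostdir}/dhchap_key"
    [[ -n ${ckeys[keyid]} ]] && echo "${ckeys[keyid]}" > "${hostdir}/dhchap_ctrl_key"

    # Host side, connect_authenticate: pin the negotiation, attach, verify, detach.
    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests "${digest}" --dhchap-dhgroups "${dhgroup}"
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" \
        ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
    [[ $(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0

The 10.0.0.1 address is not hard-coded in the script either: get_main_ns_ip maps the transport name to an environment variable (tcp to NVMF_INITIATOR_IP, rdma to NVMF_FIRST_TARGET_IP) and echoes its value, which is why every attach in the trace is preceded by the same ip_candidates boilerplate.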
00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MxNGIxYWJjYTBlZGM0YzE1MWIxOTE3YzE5YzY4OTI5YjA0MmQ3YTM5NzNjYTI2MDM5MThiNGFhNjY2MGQ2OCNTbU0=: 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MxNGIxYWJjYTBlZGM0YzE1MWIxOTE3YzE5YzY4OTI5YjA0MmQ3YTM5NzNjYTI2MDM5MThiNGFhNjY2MGQ2OCNTbU0=: 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:44.311 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:44.569 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:44.569 11:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.569 nvme0n1 00:27:44.569 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.569 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.569 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.569 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.569 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.569 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.569 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.569 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.569 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.569 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.569 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.569 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:44.569 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.569 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:44.569 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.569 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:44.569 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:44.569 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:44.569 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzZiMzk5ZGY0NjI2NjY1ZDhjNmQzYTc4MGNiNzQ2M2ICW9Hx: 00:27:44.569 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: 00:27:44.569 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.569 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:44.569 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzZiMzk5ZGY0NjI2NjY1ZDhjNmQzYTc4MGNiNzQ2M2ICW9Hx: 00:27:44.569 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: ]] 00:27:44.569 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: 00:27:44.569 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:44.569 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.570 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.570 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:44.570 11:35:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:44.570 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.570 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:44.570 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.570 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.570 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.570 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.570 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:44.570 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:44.570 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:44.570 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.570 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.570 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:44.570 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.570 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:44.570 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:44.570 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:44.570 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:44.570 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.570 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.828 nvme0n1 00:27:44.828 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.828 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.828 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.828 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.828 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.828 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.086 11:35:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDRhMmZkZDEwNmFkNDIyNDRhMjYyNDdjNjI4NjM3NTUzZmUzNjBlMzI5OGU2NjUz7SbmPQ==: 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDRhMmZkZDEwNmFkNDIyNDRhMjYyNDdjNjI4NjM3NTUzZmUzNjBlMzI5OGU2NjUz7SbmPQ==: 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: ]] 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:45.086 11:35:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.086 nvme0n1 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.086 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.344 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.344 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.344 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.344 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.344 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.344 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.344 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:45.344 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.344 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:45.344 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:45.344 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:45.344 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjg3M2FlMjIwMTA3NWFlMzU1MmEwYTIxYzAyZDc4MjCl/VfJ: 00:27:45.344 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: 00:27:45.344 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:45.344 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:45.345 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjg3M2FlMjIwMTA3NWFlMzU1MmEwYTIxYzAyZDc4MjCl/VfJ: 00:27:45.345 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: ]] 00:27:45.345 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: 00:27:45.345 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:45.345 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.345 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:45.345 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:45.345 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:45.345 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.345 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:45.345 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.345 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.345 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.345 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.345 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:45.345 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:45.345 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:45.345 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.345 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.345 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:45.345 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.345 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:45.345 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:45.345 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:45.345 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:45.345 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.345 11:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.603 nvme0n1 00:27:45.603 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.603 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.603 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.603 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.603 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.603 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.603 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:27:45.603 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.603 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.603 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.603 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.603 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.603 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:45.603 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.603 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:45.603 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:45.603 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:45.603 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDdhODg0MzU0NjMzOThhMDUwZGM1NzY0ZGZkZDRmZDg3NjQ5NWNmMDIzZGRkNWE3ldf/oA==: 00:27:45.603 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: 00:27:45.603 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:45.603 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:45.603 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDdhODg0MzU0NjMzOThhMDUwZGM1NzY0ZGZkZDRmZDg3NjQ5NWNmMDIzZGRkNWE3ldf/oA==: 00:27:45.603 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: ]] 00:27:45.603 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: 00:27:45.603 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:45.603 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.603 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:45.603 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:45.603 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:45.603 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.603 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:45.603 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.603 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.603 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.603 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.603 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:45.603 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:45.603 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:27:45.603 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.603 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.604 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:45.604 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.604 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:45.604 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:45.604 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:45.604 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:45.604 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.604 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.862 nvme0n1 00:27:45.862 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.862 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.862 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.862 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.862 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.862 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.862 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.862 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.862 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.862 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.862 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.862 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.862 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:45.862 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.862 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:45.862 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:45.862 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:45.862 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MxNGIxYWJjYTBlZGM0YzE1MWIxOTE3YzE5YzY4OTI5YjA0MmQ3YTM5NzNjYTI2MDM5MThiNGFhNjY2MGQ2OCNTbU0=: 00:27:45.862 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:45.862 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:45.862 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:45.862 
11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MxNGIxYWJjYTBlZGM0YzE1MWIxOTE3YzE5YzY4OTI5YjA0MmQ3YTM5NzNjYTI2MDM5MThiNGFhNjY2MGQ2OCNTbU0=: 00:27:45.862 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:45.862 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:45.862 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.862 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:45.862 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:45.862 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:45.862 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.862 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:45.862 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.862 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.862 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.862 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.862 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:45.862 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:45.862 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:45.862 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.862 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.862 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:45.862 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.862 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:45.863 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:45.863 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:45.863 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:45.863 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.863 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.122 nvme0n1 00:27:46.122 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.122 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.122 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.122 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.122 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.122 
11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.122 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.122 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.122 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.122 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.122 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.122 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:46.122 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.122 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:46.122 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.122 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:46.122 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:46.122 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:46.122 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzZiMzk5ZGY0NjI2NjY1ZDhjNmQzYTc4MGNiNzQ2M2ICW9Hx: 00:27:46.122 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: 00:27:46.122 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:46.122 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:46.122 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzZiMzk5ZGY0NjI2NjY1ZDhjNmQzYTc4MGNiNzQ2M2ICW9Hx: 00:27:46.122 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: ]] 00:27:46.122 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: 00:27:46.122 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:46.122 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.122 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:46.122 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:46.122 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:46.122 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.122 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:46.122 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.122 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.122 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:27:46.122 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.122 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:46.123 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:46.123 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:46.123 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.123 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.123 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:46.123 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.123 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:46.123 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:46.123 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:46.123 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:46.123 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.123 11:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.413 nvme0n1 00:27:46.413 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.413 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.413 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.413 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.413 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.413 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.413 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.413 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.413 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.413 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.672 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.672 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.672 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:46.672 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.672 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:46.672 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:46.672 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:46.672 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDRhMmZkZDEwNmFkNDIyNDRhMjYyNDdjNjI4NjM3NTUzZmUzNjBlMzI5OGU2NjUz7SbmPQ==: 00:27:46.672 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: 00:27:46.672 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:46.672 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:46.672 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDRhMmZkZDEwNmFkNDIyNDRhMjYyNDdjNjI4NjM3NTUzZmUzNjBlMzI5OGU2NjUz7SbmPQ==: 00:27:46.672 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: ]] 00:27:46.672 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: 00:27:46.672 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:46.672 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.672 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:46.672 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:46.672 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:46.672 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.672 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:46.672 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.672 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.672 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.672 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.672 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:46.672 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:46.672 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:46.672 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.672 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.672 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:46.672 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.672 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:46.672 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:46.672 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:46.672 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:46.672 11:35:42 
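
On the host side, each iteration runs connect_authenticate (host/auth.sh@55-65), whose rpc_cmd calls are the ones traced above and below. Condensed into a sketch using the same commands as the trace (the function body is abridged, not verbatim):

connect_authenticate() {
	local digest=$1 dhgroup=$2 keyid=$3
	# restrict negotiation to exactly the digest/dhgroup combination under test
	rpc_cmd bdev_nvme_set_options --dhchap-digests "${digest}" --dhchap-dhgroups "${dhgroup}"
	# attach with keyN, adding ckeyN only when a controller key is configured
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
		-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
		--dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
	# authentication succeeded iff the controller actually materialized ...
	[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
	# ... and every pass detaches so the next digest/dhgroup/keyid starts clean
	rpc_cmd bdev_nvme_detach_controller nvme0
}
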
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.672 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.931 nvme0n1 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjg3M2FlMjIwMTA3NWFlMzU1MmEwYTIxYzAyZDc4MjCl/VfJ: 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjg3M2FlMjIwMTA3NWFlMzU1MmEwYTIxYzAyZDc4MjCl/VfJ: 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: ]] 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.931 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.189 nvme0n1 00:27:47.189 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.189 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.189 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.189 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.189 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.189 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.447 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.447 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.447 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.447 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.447 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.447 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.447 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:27:47.447 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.447 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:47.447 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:47.447 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:47.447 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDdhODg0MzU0NjMzOThhMDUwZGM1NzY0ZGZkZDRmZDg3NjQ5NWNmMDIzZGRkNWE3ldf/oA==: 00:27:47.447 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: 00:27:47.447 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:47.447 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:47.447 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDdhODg0MzU0NjMzOThhMDUwZGM1NzY0ZGZkZDRmZDg3NjQ5NWNmMDIzZGRkNWE3ldf/oA==: 00:27:47.447 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: ]] 00:27:47.447 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: 00:27:47.447 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:47.447 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.447 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:47.447 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:47.447 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:47.447 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.447 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:47.447 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.447 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.447 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.447 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.447 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:47.447 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:47.447 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:47.447 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.447 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.447 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:47.447 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.447 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:47.447 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:47.447 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:47.447 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:47.447 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.447 11:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.705 nvme0n1 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MxNGIxYWJjYTBlZGM0YzE1MWIxOTE3YzE5YzY4OTI5YjA0MmQ3YTM5NzNjYTI2MDM5MThiNGFhNjY2MGQ2OCNTbU0=: 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MxNGIxYWJjYTBlZGM0YzE1MWIxOTE3YzE5YzY4OTI5YjA0MmQ3YTM5NzNjYTI2MDM5MThiNGFhNjY2MGQ2OCNTbU0=: 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:47.705 11:35:43 
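
The get_main_ns_ip block repeated before every attach (nvmf/common.sh@741-755) picks the address to dial per transport: an associative array maps each transport to the name of an environment variable, which is then dereferenced. Reconstructed from the trace (the TEST_TRANSPORT variable name is an assumption; the trace only shows its expanded value, tcp):

get_main_ns_ip() {
	local ip
	local -A ip_candidates
	ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA dials the target-side address
	ip_candidates["tcp"]=NVMF_INITIATOR_IP       # TCP runs (this one) use the initiator IP
	[[ -z ${TEST_TRANSPORT} || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
	ip=${ip_candidates[$TEST_TRANSPORT]}         # variable name, e.g. NVMF_INITIATOR_IP
	[[ -z ${!ip} ]] && return 1                  # indirect expansion; empty means unconfigured
	echo "${!ip}"                                # here: 10.0.0.1
}
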
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:47.705 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.706 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.279 nvme0n1 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzZiMzk5ZGY0NjI2NjY1ZDhjNmQzYTc4MGNiNzQ2M2ICW9Hx: 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzZiMzk5ZGY0NjI2NjY1ZDhjNmQzYTc4MGNiNzQ2M2ICW9Hx: 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: ]] 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.279 11:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.845 nvme0n1 00:27:48.845 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.845 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.845 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.845 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.845 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.845 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.845 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.845 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.845 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.845 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.845 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.845 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.845 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:48.845 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.845 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:48.845 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:48.845 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:48.845 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDRhMmZkZDEwNmFkNDIyNDRhMjYyNDdjNjI4NjM3NTUzZmUzNjBlMzI5OGU2NjUz7SbmPQ==: 00:27:48.845 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: 00:27:48.845 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:48.845 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:48.845 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDRhMmZkZDEwNmFkNDIyNDRhMjYyNDdjNjI4NjM3NTUzZmUzNjBlMzI5OGU2NjUz7SbmPQ==: 00:27:48.845 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: ]] 00:27:48.845 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: 00:27:48.845 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:48.845 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.845 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:48.845 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:48.845 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:48.845 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.845 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:48.845 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.845 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.845 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.103 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.103 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:49.103 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:49.103 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:49.103 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.103 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.103 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:49.103 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.103 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:49.103 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.103 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.103 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:49.103 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.103 11:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.669 nvme0n1 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.669 11:35:45 
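
Two assertions recur throughout this trace and are easy to misread. The "[[ 0 == 0 ]]" at autotest_common.sh@589 appears to be the rpc_cmd wrapper asserting that the preceding RPC exited with status 0 (both literals are already-expanded values). And "[[ nvme0 == \n\v\m\e\0 ]]" is not an escape-sequence artifact: inside [[ ]] an unquoted right-hand side is treated as a glob pattern, so the script backslash-escapes every character to force a literal comparison, and xtrace prints it in that escaped form:

name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ ${name} == \n\v\m\e\0 ]]   # equivalent to == "nvme0"; escaping defeats pattern matching
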
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjg3M2FlMjIwMTA3NWFlMzU1MmEwYTIxYzAyZDc4MjCl/VfJ: 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjg3M2FlMjIwMTA3NWFlMzU1MmEwYTIxYzAyZDc4MjCl/VfJ: 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: ]] 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.669 11:35:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.669 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.235 nvme0n1 00:27:50.235 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.235 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.235 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.235 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.235 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.235 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.235 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.235 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.235 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.235 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.235 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.235 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.235 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:50.235 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.235 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:50.235 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:50.235 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:50.235 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDdhODg0MzU0NjMzOThhMDUwZGM1NzY0ZGZkZDRmZDg3NjQ5NWNmMDIzZGRkNWE3ldf/oA==: 00:27:50.235 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: 00:27:50.235 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:50.235 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:50.235 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDdhODg0MzU0NjMzOThhMDUwZGM1NzY0ZGZkZDRmZDg3NjQ5NWNmMDIzZGRkNWE3ldf/oA==: 00:27:50.235 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: ]] 00:27:50.492 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: 00:27:50.492 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:50.493 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.493 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:50.493 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:50.493 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:50.493 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.493 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:50.493 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.493 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.493 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.493 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.493 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.493 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.493 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.493 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.493 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.493 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.493 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.493 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.493 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.493 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.493 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:50.493 11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.493 
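
All secrets in this run use the DH-HMAC-CHAP interchange format DHHC-1:<t>:<base64>:, where <t> records how the secret was generated: 00 is an unhashed secret of arbitrary length, while 01/02/03 mean the secret was derived via HMAC-SHA-256/-384/-512 and is therefore 32/48/64 bytes long. The five keyids deliberately cover a mix of these variants. For illustration only (this command is not part of the log), such a secret could be produced with nvme-cli:

nvme gen-dhchap-key --hmac=2 --nqn=nqn.2024-02.io.spdk:cnode0
# prints something like DHHC-1:02:<48 bytes of base64>:
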
11:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.058 nvme0n1 00:27:51.058 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.058 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.058 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.058 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.058 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.058 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.058 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.058 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.059 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.059 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.059 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.059 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.059 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:51.059 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.059 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:51.059 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:51.059 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:51.059 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MxNGIxYWJjYTBlZGM0YzE1MWIxOTE3YzE5YzY4OTI5YjA0MmQ3YTM5NzNjYTI2MDM5MThiNGFhNjY2MGQ2OCNTbU0=: 00:27:51.059 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:51.059 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:51.059 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:51.059 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MxNGIxYWJjYTBlZGM0YzE1MWIxOTE3YzE5YzY4OTI5YjA0MmQ3YTM5NzNjYTI2MDM5MThiNGFhNjY2MGQ2OCNTbU0=: 00:27:51.059 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:51.059 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:51.059 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.059 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:51.059 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:51.059 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:51.059 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.059 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:51.059 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.059 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.059 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.059 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.059 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.059 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.059 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.059 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.059 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.059 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.059 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.059 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.059 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.059 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.059 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:51.059 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.059 11:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.624 nvme0n1 00:27:51.624 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.624 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.624 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.624 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.624 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.624 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.882 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.882 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.882 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.882 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.882 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.882 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:51.882 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.882 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:51.882 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.882 11:35:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:51.882 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:51.882 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:51.882 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzZiMzk5ZGY0NjI2NjY1ZDhjNmQzYTc4MGNiNzQ2M2ICW9Hx: 00:27:51.882 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: 00:27:51.882 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:51.883 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:51.883 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzZiMzk5ZGY0NjI2NjY1ZDhjNmQzYTc4MGNiNzQ2M2ICW9Hx: 00:27:51.883 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: ]] 00:27:51.883 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: 00:27:51.883 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:51.883 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.883 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:51.883 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:51.883 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:51.883 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.883 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:51.883 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.883 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.883 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.883 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.883 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.883 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.883 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.883 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.883 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.883 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.883 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.883 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.883 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.883 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
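The secrets themselves follow the NVMe-oF DH-HMAC-CHAP representation, "DHHC-1:<hh>:<base64>:", where <hh> selects an optional transform of the secret (00 = use as-is, 01/02/03 = SHA-256/384/512) and the base64 payload carries the secret followed by a 4-byte CRC-32 trailer. A quick way to peek at one of the keys from this log (GNU coreutils assumed for the negative byte count to head):

    # Decode the third field of a DHHC-1 secret; the last 4 bytes of the decoded
    # payload are a CRC-32 of the secret, so they are stripped here.
    key='DHHC-1:00:NzZiMzk5ZGY0NjI2NjY1ZDhjNmQzYTc4MGNiNzQ2M2ICW9Hx:'
    printf '%s' "$key" | cut -d: -f3 | base64 -d | head -c -4; echo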
nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.883 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:51.883 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.883 11:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.816 nvme0n1 00:27:52.816 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.816 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.816 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.816 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.816 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDRhMmZkZDEwNmFkNDIyNDRhMjYyNDdjNjI4NjM3NTUzZmUzNjBlMzI5OGU2NjUz7SbmPQ==: 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDRhMmZkZDEwNmFkNDIyNDRhMjYyNDdjNjI4NjM3NTUzZmUzNjBlMzI5OGU2NjUz7SbmPQ==: 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: ]] 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.075 11:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.009 nvme0n1 00:27:54.009 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.009 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.009 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.009 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.009 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.009 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.009 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.009 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.009 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
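get_main_ns_ip (nvmf/common.sh@741-755), traced before every attach above, resolves the target address indirectly: the transport maps to the name of an environment variable, and bash indirection turns that name into 10.0.0.1. A reconstruction from the trace, with the guard conditions inferred from the [[ -z ... ]] tests it shows:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # variable *names*, not values
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}         # NVMF_INITIATOR_IP for tcp
        [[ -z ${!ip} ]] && return 1                  # indirect expansion: $NVMF_INITIATOR_IP
        echo "${!ip}"                                # 10.0.0.1 in this run
    }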
xtrace_disable 00:27:54.009 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.009 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.009 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.009 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:54.009 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.009 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:54.009 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:54.009 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:54.009 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjg3M2FlMjIwMTA3NWFlMzU1MmEwYTIxYzAyZDc4MjCl/VfJ: 00:27:54.009 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: 00:27:54.009 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:54.009 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:54.009 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjg3M2FlMjIwMTA3NWFlMzU1MmEwYTIxYzAyZDc4MjCl/VfJ: 00:27:54.009 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: ]] 00:27:54.009 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: 00:27:54.009 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:54.009 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.009 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:54.009 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:54.009 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:54.009 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.009 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:54.009 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.009 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.009 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.267 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.267 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:54.267 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.267 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.267 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.267 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.267 
11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:54.267 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.267 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.267 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:54.267 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.267 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:54.267 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.267 11:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.199 nvme0n1 00:27:55.199 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.199 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.199 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.199 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.199 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.199 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.199 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.199 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.199 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.199 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.457 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.457 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.457 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:55.457 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.457 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:55.457 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:55.457 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:55.457 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDdhODg0MzU0NjMzOThhMDUwZGM1NzY0ZGZkZDRmZDg3NjQ5NWNmMDIzZGRkNWE3ldf/oA==: 00:27:55.457 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: 00:27:55.457 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:55.457 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:55.457 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDdhODg0MzU0NjMzOThhMDUwZGM1NzY0ZGZkZDRmZDg3NjQ5NWNmMDIzZGRkNWE3ldf/oA==: 00:27:55.457 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: ]] 00:27:55.457 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: 00:27:55.457 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:55.457 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.457 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:55.457 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:55.457 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:55.457 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.457 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:55.457 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.457 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.457 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.457 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.458 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:55.458 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:55.458 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:55.458 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.458 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.458 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:55.458 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.458 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:55.458 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:55.458 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:55.458 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:55.458 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.458 11:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.392 nvme0n1 00:27:56.392 11:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.392 11:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.392 11:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.392 11:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.392 11:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.392 11:35:51 
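The connect side, connect_authenticate (host/auth.sh@55-61), mirrors the target-side key load: it narrows the SPDK host to the one digest/dhgroup pair under test, then attaches with a named key ("keyN"/"ckeyN") registered with the SPDK keyring earlier in the script, outside this excerpt. Condensed from the trace, with argument values and RPC flags taken verbatim from the log and the surrounding error handling omitted:

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Expands to --dhchap-ctrlr-key ckeyN only when a controller key exists (@58).
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
    }

The bare "nvme0n1" entries that punctuate the log appear to be this attach call's stdout: bdev_nvme_attach_controller prints the bdev names it created, so each one marks a handshake that completed.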
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.392 11:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.392 11:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.392 11:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.392 11:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.392 11:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.392 11:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.392 11:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:56.392 11:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.392 11:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:56.392 11:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:56.392 11:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:56.392 11:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MxNGIxYWJjYTBlZGM0YzE1MWIxOTE3YzE5YzY4OTI5YjA0MmQ3YTM5NzNjYTI2MDM5MThiNGFhNjY2MGQ2OCNTbU0=: 00:27:56.392 11:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:56.392 11:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:56.392 11:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:56.392 11:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MxNGIxYWJjYTBlZGM0YzE1MWIxOTE3YzE5YzY4OTI5YjA0MmQ3YTM5NzNjYTI2MDM5MThiNGFhNjY2MGQ2OCNTbU0=: 00:27:56.392 11:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:56.392 11:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:56.392 11:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.392 11:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:56.392 11:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:56.392 11:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:56.392 11:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.392 11:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:56.392 11:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.392 11:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.392 11:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.392 11:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.392 11:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:56.392 11:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:56.392 11:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:56.392 11:35:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.392 11:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.392 11:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:56.392 11:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.392 11:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:56.392 11:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:56.392 11:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:56.392 11:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:56.392 11:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.392 11:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.766 nvme0n1 00:27:57.766 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.766 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.766 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.766 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.766 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.766 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzZiMzk5ZGY0NjI2NjY1ZDhjNmQzYTc4MGNiNzQ2M2ICW9Hx: 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
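At this point the trace rolls over from sha384 to sha512 (the "for digest" marker at host/auth.sh@100), which exposes the shape of the whole sweep: three nested loops over digest, DH group, and key index, each iteration loading the key into the target and then authenticating against it. Reconstructed from the loop markers at host/auth.sh@100-103; the comments list only the values visible in this part of the log:

    for digest in "${digests[@]}"; do          # sha384, then sha512 here
        for dhgroup in "${dhgroups[@]}"; do    # ffdhe2048 .. ffdhe8192 in this log
            for keyid in "${!keys[@]}"; do     # indices 0-4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done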
ckey=DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzZiMzk5ZGY0NjI2NjY1ZDhjNmQzYTc4MGNiNzQ2M2ICW9Hx: 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: ]] 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:57.767 nvme0n1 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDRhMmZkZDEwNmFkNDIyNDRhMjYyNDdjNjI4NjM3NTUzZmUzNjBlMzI5OGU2NjUz7SbmPQ==: 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDRhMmZkZDEwNmFkNDIyNDRhMjYyNDdjNjI4NjM3NTUzZmUzNjBlMzI5OGU2NjUz7SbmPQ==: 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: ]] 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
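Between iterations, every controller is verified and torn down the same way (host/auth.sh@64-65): list the controllers over JSON-RPC, pull the name out with jq, compare it against the glob-escaped literal nvme0, and detach. The repeated "[[ nvme0 == \n\v\m\e\0 ]]" entries are that comparison; condensed:

    # host/auth.sh@64-65, condensed from the trace.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]                       # backslash-escaped in the xtrace output
    rpc_cmd bdev_nvme_detach_controller nvme0    # clean up before the next combination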
"ckey${keyid}"}) 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.767 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.025 nvme0n1 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:58.025 
11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjg3M2FlMjIwMTA3NWFlMzU1MmEwYTIxYzAyZDc4MjCl/VfJ: 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjg3M2FlMjIwMTA3NWFlMzU1MmEwYTIxYzAyZDc4MjCl/VfJ: 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: ]] 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.025 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:58.026 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.026 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.026 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.026 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.026 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:58.026 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:58.026 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:58.026 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.026 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.026 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:58.026 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.026 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:58.026 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:58.026 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
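One reading aid for the log: almost every RPC is bracketed by the same entries, "autotest_common.sh@561 xtrace_disable" plus "@10 set +x" before it and "@589 [[ 0 == 0 ]]" after it. The command itself is traced at its call site (the host/auth.sh@NN line), the helper's internals are silenced, and the saved exit status is re-asserted once tracing resumes, so only the command and its verdict reach the log. A hedged sketch of that idiom (xtrace_disable is the real helper name from the trace; xtrace_restore and the wrapper name are assumptions):

    # Hypothetical wrapper illustrating the @561/@589 bracketing seen above.
    quiet_check() {
        xtrace_disable          # entry at autotest_common.sh@561, then set +x at @10
        "$@"                    # runs untraced; the caller's xtrace line already showed it
        local rc=$?
        xtrace_restore          # assumed counterpart (not visible while tracing is off)
        [[ $rc == 0 ]]          # logged as "[[ 0 == 0 ]]" when the command succeeded
    }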
nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:58.026 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:58.026 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.026 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.283 nvme0n1 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDdhODg0MzU0NjMzOThhMDUwZGM1NzY0ZGZkZDRmZDg3NjQ5NWNmMDIzZGRkNWE3ldf/oA==: 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDdhODg0MzU0NjMzOThhMDUwZGM1NzY0ZGZkZDRmZDg3NjQ5NWNmMDIzZGRkNWE3ldf/oA==: 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: ]] 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.283 
11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.283 11:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.541 nvme0n1 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MxNGIxYWJjYTBlZGM0YzE1MWIxOTE3YzE5YzY4OTI5YjA0MmQ3YTM5NzNjYTI2MDM5MThiNGFhNjY2MGQ2OCNTbU0=: 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MxNGIxYWJjYTBlZGM0YzE1MWIxOTE3YzE5YzY4OTI5YjA0MmQ3YTM5NzNjYTI2MDM5MThiNGFhNjY2MGQ2OCNTbU0=: 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.542 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.800 nvme0n1 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzZiMzk5ZGY0NjI2NjY1ZDhjNmQzYTc4MGNiNzQ2M2ICW9Hx: 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzZiMzk5ZGY0NjI2NjY1ZDhjNmQzYTc4MGNiNzQ2M2ICW9Hx: 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: ]] 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.800 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.058 nvme0n1 00:27:59.058 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.058 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.058 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.058 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.058 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.058 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.058 
11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.058 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.058 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.058 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.058 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.058 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.058 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:59.058 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.058 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:59.058 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:59.058 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:59.058 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDRhMmZkZDEwNmFkNDIyNDRhMjYyNDdjNjI4NjM3NTUzZmUzNjBlMzI5OGU2NjUz7SbmPQ==: 00:27:59.058 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: 00:27:59.058 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:59.058 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:59.058 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDRhMmZkZDEwNmFkNDIyNDRhMjYyNDdjNjI4NjM3NTUzZmUzNjBlMzI5OGU2NjUz7SbmPQ==: 00:27:59.058 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: ]] 00:27:59.059 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: 00:27:59.059 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:59.059 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.059 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:59.059 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:59.059 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:59.059 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.059 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:59.059 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.059 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.059 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.059 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.059 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:59.059 11:35:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:59.059 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:59.059 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.059 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.059 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:59.059 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.059 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:59.059 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:59.059 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:59.059 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:59.059 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.059 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.317 nvme0n1 00:27:59.317 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.317 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.317 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.317 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.317 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.317 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.574 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.574 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.574 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.574 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.574 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.574 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.574 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:59.574 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.574 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:59.574 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:59.574 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:59.574 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjg3M2FlMjIwMTA3NWFlMzU1MmEwYTIxYzAyZDc4MjCl/VfJ: 00:27:59.574 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: 00:27:59.574 11:35:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:59.574 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:59.574 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjg3M2FlMjIwMTA3NWFlMzU1MmEwYTIxYzAyZDc4MjCl/VfJ: 00:27:59.574 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: ]] 00:27:59.574 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: 00:27:59.574 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:59.574 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.574 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:59.574 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:59.574 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:59.574 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.575 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:59.575 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.575 11:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.575 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.575 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.575 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:59.575 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:59.575 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:59.575 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.575 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.575 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:59.575 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.575 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:59.575 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:59.575 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:59.575 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:59.575 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.575 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.575 nvme0n1 00:27:59.575 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.575 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.575 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.575 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.575 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.831 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.831 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.831 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.831 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.831 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.831 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.831 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.831 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:59.832 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.832 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:59.832 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:59.832 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:59.832 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDdhODg0MzU0NjMzOThhMDUwZGM1NzY0ZGZkZDRmZDg3NjQ5NWNmMDIzZGRkNWE3ldf/oA==: 00:27:59.832 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: 00:27:59.832 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:59.832 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:59.832 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDdhODg0MzU0NjMzOThhMDUwZGM1NzY0ZGZkZDRmZDg3NjQ5NWNmMDIzZGRkNWE3ldf/oA==: 00:27:59.832 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: ]] 00:27:59.832 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: 00:27:59.832 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:59.832 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.832 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:59.832 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:59.832 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:59.832 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.832 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:59.832 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.832 11:35:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.832 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.832 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.832 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:59.832 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:59.832 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:59.832 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.832 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.832 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:59.832 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.832 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:59.832 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:59.832 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:59.832 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:59.832 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.832 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.088 nvme0n1 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:00.088 
11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MxNGIxYWJjYTBlZGM0YzE1MWIxOTE3YzE5YzY4OTI5YjA0MmQ3YTM5NzNjYTI2MDM5MThiNGFhNjY2MGQ2OCNTbU0=: 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MxNGIxYWJjYTBlZGM0YzE1MWIxOTE3YzE5YzY4OTI5YjA0MmQ3YTM5NzNjYTI2MDM5MThiNGFhNjY2MGQ2OCNTbU0=: 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:00.088 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:00.089 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:00.089 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.089 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
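 
Each iteration logged above follows the same shape: pick a digest/dhgroup/keyid triple, mirror the key material into the kernel nvmet target (the nvmet_auth_set_key echoes of 'hmac(sha512)', the DH group, and the DHHC-1 key and controller key), restrict the host to that single digest and DH group via bdev_nvme_set_options, attach with the matching --dhchap-key/--dhchap-ctrlr-key pair, confirm the controller came up via bdev_nvme_get_controllers, and detach. A minimal stand-alone sketch of one such round, host side only, assuming SPDK's scripts/rpc.py (abbreviated rpc.py here) is the transport behind the test's rpc_cmd wrapper and that keys named key0/ckey0 were registered earlier in auth.sh, outside this excerpt:

    # One connect_authenticate round (sketch; rpc.py path and key names assumed as above).
    digest=sha512
    dhgroup=ffdhe3072
    keyid=0
    # Limit the host to a single digest and DH group for this attempt.
    rpc.py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # Attach with DH-HMAC-CHAP, using the key pair for this keyid.
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
    # Authentication succeeded if the controller is visible by name.
    rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    rpc.py bdev_nvme_detach_controller nvme0

When keyid is 4 the controller key is empty, so the attach is issued with --dhchap-key only and no --dhchap-ctrlr-key, exactly as in the first attempt recorded above.
 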
00:28:00.346 nvme0n1 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzZiMzk5ZGY0NjI2NjY1ZDhjNmQzYTc4MGNiNzQ2M2ICW9Hx: 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzZiMzk5ZGY0NjI2NjY1ZDhjNmQzYTc4MGNiNzQ2M2ICW9Hx: 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: ]] 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:00.346 11:35:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.346 11:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.603 nvme0n1 00:28:00.603 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.603 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.603 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.603 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.603 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.603 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.603 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.603 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.603 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.603 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.603 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.603 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.603 11:35:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:00.603 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.603 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:00.603 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:00.603 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:00.603 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDRhMmZkZDEwNmFkNDIyNDRhMjYyNDdjNjI4NjM3NTUzZmUzNjBlMzI5OGU2NjUz7SbmPQ==: 00:28:00.603 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: 00:28:00.603 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:00.603 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:00.604 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDRhMmZkZDEwNmFkNDIyNDRhMjYyNDdjNjI4NjM3NTUzZmUzNjBlMzI5OGU2NjUz7SbmPQ==: 00:28:00.604 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: ]] 00:28:00.604 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: 00:28:00.604 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:00.604 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.604 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:00.604 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:00.604 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:00.604 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.604 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:00.604 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.604 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.604 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.604 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.604 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:00.604 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:00.604 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:00.604 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.604 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.604 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:00.604 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.604 11:35:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:00.604 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:00.604 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:00.604 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:00.604 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.604 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.192 nvme0n1 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjg3M2FlMjIwMTA3NWFlMzU1MmEwYTIxYzAyZDc4MjCl/VfJ: 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjg3M2FlMjIwMTA3NWFlMzU1MmEwYTIxYzAyZDc4MjCl/VfJ: 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: ]] 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.192 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.461 nvme0n1 00:28:01.461 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.461 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.461 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.461 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.461 11:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.461 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.461 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.461 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:01.461 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.461 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.461 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.461 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.461 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:01.461 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.461 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:01.461 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:01.461 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:01.461 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDdhODg0MzU0NjMzOThhMDUwZGM1NzY0ZGZkZDRmZDg3NjQ5NWNmMDIzZGRkNWE3ldf/oA==: 00:28:01.461 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: 00:28:01.461 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:01.461 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:01.461 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDdhODg0MzU0NjMzOThhMDUwZGM1NzY0ZGZkZDRmZDg3NjQ5NWNmMDIzZGRkNWE3ldf/oA==: 00:28:01.461 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: ]] 00:28:01.461 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: 00:28:01.461 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:01.461 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.461 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:01.461 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:01.461 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:01.461 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.461 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:01.461 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.461 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.461 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.461 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.461 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:01.461 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:01.461 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:01.461 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.461 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.461 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:01.461 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.461 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:01.461 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:01.461 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:01.462 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:01.462 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.462 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.026 nvme0n1 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MxNGIxYWJjYTBlZGM0YzE1MWIxOTE3YzE5YzY4OTI5YjA0MmQ3YTM5NzNjYTI2MDM5MThiNGFhNjY2MGQ2OCNTbU0=: 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Y2MxNGIxYWJjYTBlZGM0YzE1MWIxOTE3YzE5YzY4OTI5YjA0MmQ3YTM5NzNjYTI2MDM5MThiNGFhNjY2MGQ2OCNTbU0=: 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.026 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.283 nvme0n1 00:28:02.283 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.283 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.283 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.283 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.283 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.283 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.283 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.283 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.283 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.283 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.283 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.283 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:02.283 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.283 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:02.283 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.283 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:02.283 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:02.283 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:02.283 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzZiMzk5ZGY0NjI2NjY1ZDhjNmQzYTc4MGNiNzQ2M2ICW9Hx: 00:28:02.283 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: 00:28:02.283 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:02.283 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:02.283 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzZiMzk5ZGY0NjI2NjY1ZDhjNmQzYTc4MGNiNzQ2M2ICW9Hx: 00:28:02.283 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: ]] 00:28:02.283 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: 00:28:02.283 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:02.283 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.283 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:02.283 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:02.283 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:02.283 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.283 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:02.283 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.283 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.541 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.541 11:35:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.541 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:02.541 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:02.541 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:02.541 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.541 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.541 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:02.541 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.541 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:02.541 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:02.541 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:02.541 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:02.541 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.541 11:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.126 nvme0n1 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDRhMmZkZDEwNmFkNDIyNDRhMjYyNDdjNjI4NjM3NTUzZmUzNjBlMzI5OGU2NjUz7SbmPQ==: 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDRhMmZkZDEwNmFkNDIyNDRhMjYyNDdjNjI4NjM3NTUzZmUzNjBlMzI5OGU2NjUz7SbmPQ==: 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: ]] 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:03.126 11:35:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.126 11:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.696 nvme0n1 00:28:03.696 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.696 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.696 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.696 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.696 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.696 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.954 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.954 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.954 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.954 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.954 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.954 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.954 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:03.954 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.954 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:03.954 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:03.954 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:03.954 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjg3M2FlMjIwMTA3NWFlMzU1MmEwYTIxYzAyZDc4MjCl/VfJ: 00:28:03.954 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: 00:28:03.954 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:03.954 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:03.954 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjg3M2FlMjIwMTA3NWFlMzU1MmEwYTIxYzAyZDc4MjCl/VfJ: 00:28:03.954 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: ]] 00:28:03.954 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: 00:28:03.954 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:03.954 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.954 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:03.954 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:03.954 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:03.954 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.954 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:03.954 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.954 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.954 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.954 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.954 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:03.954 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:03.954 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:03.954 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.954 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.954 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:03.954 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.954 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:03.954 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:03.954 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:03.954 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:03.954 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.954 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.518 nvme0n1 00:28:04.518 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.518 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.518 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.518 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.518 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.518 11:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.518 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.518 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.518 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.518 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.519 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.519 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.519 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:28:04.519 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.519 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:04.519 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:04.519 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:04.519 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDdhODg0MzU0NjMzOThhMDUwZGM1NzY0ZGZkZDRmZDg3NjQ5NWNmMDIzZGRkNWE3ldf/oA==: 00:28:04.519 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: 00:28:04.519 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:04.519 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:04.519 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDdhODg0MzU0NjMzOThhMDUwZGM1NzY0ZGZkZDRmZDg3NjQ5NWNmMDIzZGRkNWE3ldf/oA==: 00:28:04.519 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: ]] 00:28:04.519 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: 00:28:04.519 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:04.519 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.519 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:04.519 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:04.519 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:04.519 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.519 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:04.519 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.519 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.519 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.519 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.519 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:04.519 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:04.519 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:04.519 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.519 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.519 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:04.519 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.519 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:04.519 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:04.519 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:04.519 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:04.519 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.519 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.082 nvme0n1 00:28:05.082 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.082 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.082 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.082 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.082 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.082 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.082 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.082 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.082 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.083 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.083 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.083 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.083 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:05.083 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.083 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:05.083 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:05.083 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:05.083 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MxNGIxYWJjYTBlZGM0YzE1MWIxOTE3YzE5YzY4OTI5YjA0MmQ3YTM5NzNjYTI2MDM5MThiNGFhNjY2MGQ2OCNTbU0=: 00:28:05.083 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:05.083 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:05.083 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:05.083 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MxNGIxYWJjYTBlZGM0YzE1MWIxOTE3YzE5YzY4OTI5YjA0MmQ3YTM5NzNjYTI2MDM5MThiNGFhNjY2MGQ2OCNTbU0=: 00:28:05.083 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:05.083 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:05.083 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.083 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:05.083 11:36:00 
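
The DHHC-1 strings being echoed through the trace are NVMe in-band authentication secrets. The two-digit field after DHHC-1: flags how the secret is (or is not) hash-transformed (00 = untransformed, 01/02/03 = SHA-256/-384/-512), and the base64 payload is the key bytes with a 4-byte CRC-32 appended. A quick sanity check on one of the secrets from this trace, as a sketch (any base64 utility with -d will do; wc -c should print key length plus 4 for the CRC):

secret='DHHC-1:03:Y2MxNGIxYWJjYTBlZGM0YzE1MWIxOTE3YzE5YzY4OTI5YjA0MmQ3YTM5NzNjYTI2MDM5MThiNGFhNjY2MGQ2OCNTbU0=:'
payload=${secret#DHHC-1:*:}           # strip the "DHHC-1:<nn>:" prefix (shortest match)
payload=${payload%:}                  # strip the trailing colon
echo "$payload" | base64 -d | wc -c   # key bytes + 4-byte CRC-32
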
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:05.083 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:05.083 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.083 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:05.083 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.083 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.339 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.339 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.339 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:05.339 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:05.339 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:05.339 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.339 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.339 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:05.339 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.339 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:05.339 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:05.339 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:05.339 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:05.339 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.339 11:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.905 nvme0n1 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
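
Key id 4 is the only entry with an empty controller key (the bare ckey= traced above), so that attach authenticates the host direction only. The ${ckeys[keyid]:+...} expansion seen in the trace is what drops the flag when the controller secret is empty; in isolation:

ckeys[4]=''                                      # no controller secret for key id 4
ckey=(${ckeys[4]:+--dhchap-ctrlr-key "ckey4"})
echo "${#ckey[@]}"                               # 0 -> no --dhchap-ctrlr-key flag passed

With a non-empty entry the same expansion yields the two words --dhchap-ctrlr-key ckeyN, which is the bidirectional form traced for key ids 0 through 3.
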
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzZiMzk5ZGY0NjI2NjY1ZDhjNmQzYTc4MGNiNzQ2M2ICW9Hx: 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzZiMzk5ZGY0NjI2NjY1ZDhjNmQzYTc4MGNiNzQ2M2ICW9Hx: 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: ]] 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTE3MGJjOTU2NDM5MDFiYWMxMGNmMWUwMTlmNjM0N2IzZTMxMzEzMDk1ZDYxYWJjMzkwOTIwZGE5ZTFkMTk2NcHZjUM=: 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.905 11:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.277 nvme0n1 00:28:07.277 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.277 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.277 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.277 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.277 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.277 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.277 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.277 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.277 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.277 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.277 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.277 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.277 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:07.277 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.277 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:07.278 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:07.278 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:07.278 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDRhMmZkZDEwNmFkNDIyNDRhMjYyNDdjNjI4NjM3NTUzZmUzNjBlMzI5OGU2NjUz7SbmPQ==: 00:28:07.278 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: 00:28:07.278 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:07.278 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:07.278 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDRhMmZkZDEwNmFkNDIyNDRhMjYyNDdjNjI4NjM3NTUzZmUzNjBlMzI5OGU2NjUz7SbmPQ==: 00:28:07.278 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: ]] 00:28:07.278 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: 00:28:07.278 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:07.278 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.278 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:07.278 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:07.278 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:07.278 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.278 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:07.278 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.278 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.278 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.278 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.278 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:07.278 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:07.278 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:07.278 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.278 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.278 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:07.278 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.278 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:07.278 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:07.278 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:07.278 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:07.278 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.278 11:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.215 nvme0n1 00:28:08.215 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.215 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.215 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.215 11:36:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.215 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.215 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.215 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.215 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.215 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.215 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.215 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.215 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.215 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:08.215 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.215 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:08.215 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:08.215 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:08.215 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yjg3M2FlMjIwMTA3NWFlMzU1MmEwYTIxYzAyZDc4MjCl/VfJ: 00:28:08.215 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: 00:28:08.215 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:08.215 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:08.215 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yjg3M2FlMjIwMTA3NWFlMzU1MmEwYTIxYzAyZDc4MjCl/VfJ: 00:28:08.215 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: ]] 00:28:08.215 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmRiZDY0ZTgxZjc4ZDQyYWZjYjUxMjkwYWZmOTBiZTnwujbC: 00:28:08.215 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:08.215 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.215 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:08.215 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:08.215 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:08.215 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.215 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:08.215 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.215 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.215 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.215 11:36:03 
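
Before every attach, the get_main_ns_ip helper that keeps unrolling in this trace picks the address variable by transport and indirect-expands it. Reconstructed from the traced lines, it behaves roughly like the sketch below (the transport variable's name is an assumption; the trace only shows its value, tcp, and the resolved address 10.0.0.1):

get_main_ns_ip() {
  local ip
  local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
  # bail out if the transport is unset or has no mapped address variable
  [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
  ip=${ip_candidates[$TEST_TRANSPORT]}
  [[ -z ${!ip} ]] && return 1   # indirect expansion: NVMF_INITIATOR_IP -> 10.0.0.1
  echo "${!ip}"
}
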
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.215 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:08.215 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:08.215 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:08.215 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.215 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.215 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:08.215 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.216 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:08.216 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:08.216 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:08.216 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:08.216 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.216 11:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.586 nvme0n1 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDdhODg0MzU0NjMzOThhMDUwZGM1NzY0ZGZkZDRmZDg3NjQ5NWNmMDIzZGRkNWE3ldf/oA==: 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDdhODg0MzU0NjMzOThhMDUwZGM1NzY0ZGZkZDRmZDg3NjQ5NWNmMDIzZGRkNWE3ldf/oA==: 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: ]] 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThmNzY3ZDQ2ZjdiOTdmZDQzMjQwNzVhZDNjMTdlYmIuu2Sg: 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:09.586 11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.586 
11:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.957 nvme0n1 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MxNGIxYWJjYTBlZGM0YzE1MWIxOTE3YzE5YzY4OTI5YjA0MmQ3YTM5NzNjYTI2MDM5MThiNGFhNjY2MGQ2OCNTbU0=: 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MxNGIxYWJjYTBlZGM0YzE1MWIxOTE3YzE5YzY4OTI5YjA0MmQ3YTM5NzNjYTI2MDM5MThiNGFhNjY2MGQ2OCNTbU0=: 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.957 11:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.889 nvme0n1 00:28:11.889 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.889 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.889 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.889 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.889 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.889 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.890 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.890 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.890 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.890 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.890 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.890 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:11.890 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.890 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:11.890 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:11.890 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:28:11.890 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDRhMmZkZDEwNmFkNDIyNDRhMjYyNDdjNjI4NjM3NTUzZmUzNjBlMzI5OGU2NjUz7SbmPQ==: 00:28:11.890 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: 00:28:11.890 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:11.890 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:11.890 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDRhMmZkZDEwNmFkNDIyNDRhMjYyNDdjNjI4NjM3NTUzZmUzNjBlMzI5OGU2NjUz7SbmPQ==: 00:28:11.890 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: ]] 00:28:11.890 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTlkOTU2MjE0Nzc1MDJhY2U0MjYzNzJiYWZkYWI4N2QxMTM0NTc5NzgyNDNmNWYwR+n21g==: 00:28:11.890 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:11.890 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.890 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.890 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.890 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:11.890 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:11.890 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:11.890 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:11.890 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.890 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.890 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:11.890 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.890 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:11.890 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:11.890 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:11.890 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:11.890 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:11.890 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:11.890 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:11.890 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:11.890 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:11.890 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:11.890 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:11.890 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.890 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.148 request: 00:28:12.148 { 00:28:12.148 "name": "nvme0", 00:28:12.148 "trtype": "tcp", 00:28:12.148 "traddr": "10.0.0.1", 00:28:12.148 "adrfam": "ipv4", 00:28:12.148 "trsvcid": "4420", 00:28:12.148 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:12.148 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:12.148 "prchk_reftag": false, 00:28:12.148 "prchk_guard": false, 00:28:12.148 "hdgst": false, 00:28:12.148 "ddgst": false, 00:28:12.148 "method": "bdev_nvme_attach_controller", 00:28:12.148 "req_id": 1 00:28:12.148 } 00:28:12.148 Got JSON-RPC error response 00:28:12.148 response: 00:28:12.148 { 00:28:12.148 "code": -5, 00:28:12.148 "message": "Input/output error" 00:28:12.148 } 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:12.148 11:36:07 
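
The request/response dump above is the first of the expected failures: with the target reprogrammed for sha256/ffdhe2048 and key1, host/auth.sh asserts that attach attempts with a missing or mismatched key are rejected. The NOT helper from autotest_common.sh inverts the exit status, so each assertion is equivalent to the sketch below; the authentication failure surfaces as JSON-RPC error -5 ("Input/output error"), exactly as dumped:

# passes only if the attach *fails*; a successful attach would fail the test
if rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
     -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
  echo "attach without a matching DH-CHAP key unexpectedly succeeded" >&2
  exit 1
fi
# and the rejected attempt must leave no controller behind
(( $(rpc_cmd bdev_nvme_get_controllers | jq length) == 0 ))
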
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.148 request: 00:28:12.148 { 00:28:12.148 "name": "nvme0", 00:28:12.148 "trtype": "tcp", 00:28:12.148 "traddr": "10.0.0.1", 00:28:12.148 "adrfam": "ipv4", 00:28:12.148 "trsvcid": "4420", 00:28:12.148 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:12.148 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:12.148 "prchk_reftag": false, 00:28:12.148 "prchk_guard": false, 00:28:12.148 "hdgst": false, 00:28:12.148 "ddgst": false, 00:28:12.148 "dhchap_key": "key2", 00:28:12.148 "method": "bdev_nvme_attach_controller", 00:28:12.148 "req_id": 1 00:28:12.148 } 00:28:12.148 Got JSON-RPC error response 00:28:12.148 response: 00:28:12.148 { 00:28:12.148 "code": -5, 00:28:12.148 "message": "Input/output error" 00:28:12.148 } 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@123 -- # get_main_ns_ip 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.148 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.406 request: 00:28:12.406 { 00:28:12.406 "name": "nvme0", 00:28:12.406 "trtype": "tcp", 00:28:12.406 "traddr": "10.0.0.1", 00:28:12.406 "adrfam": "ipv4", 00:28:12.406 "trsvcid": "4420", 00:28:12.406 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:12.406 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:12.406 "prchk_reftag": false, 00:28:12.406 "prchk_guard": false, 00:28:12.406 "hdgst": false, 00:28:12.406 "ddgst": false, 00:28:12.406 "dhchap_key": "key1", 00:28:12.406 "dhchap_ctrlr_key": "ckey2", 00:28:12.406 "method": "bdev_nvme_attach_controller", 00:28:12.406 "req_id": 1 00:28:12.406 } 00:28:12.406 Got JSON-RPC error response 00:28:12.406 response: 00:28:12.406 { 00:28:12.406 "code": -5, 00:28:12.406 "message": "Input/output error" 00:28:12.406 } 00:28:12.406 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:12.406 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:12.406 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:12.406 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:12.406 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:12.406 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:28:12.406 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:28:12.406 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:12.406 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:12.406 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:28:12.406 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:12.406 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:28:12.406 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:12.406 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:12.406 rmmod nvme_tcp 00:28:12.406 rmmod nvme_fabrics 00:28:12.406 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:12.406 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:28:12.406 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:28:12.406 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2214279 ']' 00:28:12.406 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2214279 00:28:12.406 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 2214279 ']' 00:28:12.406 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 2214279 00:28:12.406 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:28:12.406 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:12.406 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2214279 00:28:12.406 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:12.406 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:12.406 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2214279' 00:28:12.406 killing process with pid 2214279 00:28:12.406 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 2214279 00:28:12.407 11:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 2214279 00:28:12.665 11:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:12.665 11:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:12.665 11:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:12.665 11:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:12.665 11:36:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:12.665 11:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.665 11:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:12.665 11:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:14.569 11:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:14.569 11:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:14.569 11:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:14.828 11:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:14.828 11:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:14.828 11:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:28:14.828 11:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:14.828 11:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:14.828 11:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:14.828 11:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:14.828 11:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:14.828 11:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:14.828 11:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:16.732 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:16.732 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:16.732 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:16.732 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:16.732 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:16.732 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:16.732 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:16.732 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:16.732 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:16.732 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:16.732 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:16.732 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:16.732 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:16.732 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:16.732 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:16.732 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:17.298 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:28:17.557 11:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.VbP /tmp/spdk.key-null.XrY /tmp/spdk.key-sha256.M7x /tmp/spdk.key-sha384.ifR /tmp/spdk.key-sha512.xIr /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:17.557 11:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:18.994 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:28:18.994 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:28:18.994 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:28:18.994 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:28:18.994 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:28:18.994 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:28:18.994 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:28:18.994 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:28:18.994 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:28:18.994 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:28:18.994 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:28:18.994 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:28:18.994 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:28:18.994 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:28:18.994 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:28:18.994 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:28:18.994 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:28:19.254 00:28:19.254 real 0m57.576s 00:28:19.254 user 0m55.353s 00:28:19.254 sys 0m7.474s 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.254 ************************************ 00:28:19.254 END TEST nvmf_auth_host 00:28:19.254 ************************************ 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.254 ************************************ 00:28:19.254 START TEST nvmf_digest 00:28:19.254 ************************************ 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:19.254 * Looking for test storage... 
00:28:19.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:19.254 
11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:28:19.254 11:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:21.838 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:21.838 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:28:21.838 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:21.838 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:21.838 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:21.838 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:21.838 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:21.838 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:28:21.839 Found 0000:84:00.0 (0x8086 - 0x159b) 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:28:21.839 Found 0000:84:00.1 (0x8086 - 0x159b) 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:21.839 
11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:28:21.839 Found net devices under 0000:84:00.0: cvl_0_0 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:21.839 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.097 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:22.097 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:22.097 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.097 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:28:22.097 Found net devices under 0000:84:00.1: cvl_0_1 00:28:22.097 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.097 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:22.097 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:28:22.097 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:22.097 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:22.097 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:22.097 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:22.097 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:22.097 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:22.097 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:22.097 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:22.097 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:22.097 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:22.097 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:22.097 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:22.097 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:22.097 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:22.098 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:22.098 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:22.098 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:22.098 11:36:17 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:22.098 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:22.098 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:22.098 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:22.098 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:22.098 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:22.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:22.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:28:22.098 00:28:22.098 --- 10.0.0.2 ping statistics --- 00:28:22.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.098 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:28:22.098 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:22.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:22.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:28:22.098 00:28:22.098 --- 10.0.0.1 ping statistics --- 00:28:22.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.098 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:28:22.098 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:22.098 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:28:22.098 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:22.098 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:22.098 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:22.098 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:22.098 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:22.098 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:22.098 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:22.098 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:22.098 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:22.098 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:22.098 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:22.098 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:22.098 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:22.098 ************************************ 00:28:22.098 START TEST nvmf_digest_clean 00:28:22.098 ************************************ 00:28:22.098 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:28:22.098 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:28:22.098 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:22.098 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:22.098 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:22.098 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:22.098 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:22.098 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:22.098 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:22.098 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2225255 00:28:22.098 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:22.098 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2225255 00:28:22.098 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2225255 ']' 00:28:22.098 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.098 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:22.098 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:22.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:22.098 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:22.098 11:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:22.356 [2024-07-26 11:36:17.770965] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:28:22.356 [2024-07-26 11:36:17.771069] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:22.356 EAL: No free 2048 kB hugepages reported on node 1 00:28:22.356 [2024-07-26 11:36:17.855541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.356 [2024-07-26 11:36:17.979827] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:22.356 [2024-07-26 11:36:17.979888] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:22.356 [2024-07-26 11:36:17.979905] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:22.356 [2024-07-26 11:36:17.979918] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:22.356 [2024-07-26 11:36:17.979930] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
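For orientation: the NOTICE banner above, and the reactor line that follows, come from the nvmfappstart step traced at nvmf/common.sh@480-482. Reconstructed from the traced commands rather than quoted from the script, the bring-up is roughly:

    # launch the target inside the test namespace, RPC-gated (sketch from the trace, not script source)
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!                  # 2225255 in this run
    waitforlisten "$nvmfpid"    # polls until /var/tmp/spdk.sock accepts RPCs

common_target_config (host/digest.sh@43) then feeds the target its configuration over rpc_cmd; the null0 bdev, the "*** TCP Transport Init ***" notice, and the 10.0.0.2:4420 listener for nqn.2016-06.io.spdk:cnode1 seen below are the result of that step.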
00:28:22.356 [2024-07-26 11:36:17.979966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.615 11:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:22.615 11:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:22.615 11:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:22.615 11:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:22.615 11:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:22.615 11:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:22.615 11:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:22.615 11:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:22.615 11:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:22.615 11:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.615 11:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:22.615 null0 00:28:22.615 [2024-07-26 11:36:18.170579] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:22.615 [2024-07-26 11:36:18.194810] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:22.615 11:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.615 11:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:22.615 11:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:22.615 11:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:22.615 11:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:22.615 11:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:22.615 11:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:22.615 11:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:22.615 11:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2225338 00:28:22.615 11:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2225338 /var/tmp/bperf.sock 00:28:22.615 11:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:22.615 11:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2225338 ']' 00:28:22.615 11:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:22.615 11:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:28:22.615 11:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:22.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:22.615 11:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:22.615 11:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:22.615 [2024-07-26 11:36:18.246025] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:28:22.615 [2024-07-26 11:36:18.246100] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2225338 ] 00:28:22.873 EAL: No free 2048 kB hugepages reported on node 1 00:28:22.873 [2024-07-26 11:36:18.312763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.873 [2024-07-26 11:36:18.434175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:23.131 11:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:23.131 11:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:23.131 11:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:23.131 11:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:23.131 11:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:23.697 11:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:23.697 11:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:23.955 nvme0n1 00:28:23.955 11:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:23.955 11:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:24.213 Running I/O for 2 seconds... 
00:28:26.745 00:28:26.745 Latency(us) 00:28:26.745 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:26.745 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:26.745 nvme0n1 : 2.05 17450.55 68.17 0.00 0.00 7183.10 3106.89 48351.00 00:28:26.745 =================================================================================================================== 00:28:26.745 Total : 17450.55 68.17 0.00 0.00 7183.10 3106.89 48351.00 00:28:26.745 0 00:28:26.745 11:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:26.745 11:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:26.745 11:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:26.745 11:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:26.745 11:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:26.745 | select(.opcode=="crc32c") 00:28:26.745 | "\(.module_name) \(.executed)"' 00:28:26.745 11:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:26.745 11:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:26.745 11:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:26.745 11:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:26.745 11:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2225338 00:28:26.745 11:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2225338 ']' 00:28:26.745 11:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2225338 00:28:26.745 11:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:26.745 11:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:26.745 11:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2225338 00:28:26.745 11:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:26.745 11:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:26.745 11:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2225338' 00:28:26.745 killing process with pid 2225338 00:28:26.745 11:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2225338 00:28:26.745 Received shutdown signal, test time was about 2.000000 seconds 00:28:26.745 00:28:26.745 Latency(us) 00:28:26.745 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:26.745 =================================================================================================================== 00:28:26.745 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:26.745 11:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 2225338 00:28:27.002 11:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:27.002 11:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:27.002 11:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:27.002 11:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:27.002 11:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:27.002 11:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:27.002 11:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:27.002 11:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2225866 00:28:27.002 11:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2225866 /var/tmp/bperf.sock 00:28:27.002 11:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:27.002 11:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2225866 ']' 00:28:27.002 11:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:27.003 11:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:27.003 11:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:27.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:27.003 11:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:27.003 11:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:27.003 [2024-07-26 11:36:22.566878] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:28:27.003 [2024-07-26 11:36:22.567052] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2225866 ] 00:28:27.003 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:27.003 Zero copy mechanism will not be used. 
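Condensed for reference, the crc32c accounting check that closed the first pass above (host/digest.sh@93-96) amounts to the following, reconstructed from the trace:

    # read "<module> <executed>" for the crc32c opcode from bperf's accel stats
    read -r acc_module acc_executed < <(
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 ))            # digests were actually computed during the run
    [[ $acc_module == software ]]     # scan_dsa=false here, so the software module is expected

The same check runs after each of the remaining passes; under the dsa_initiator branch traced earlier, a DSA-offloaded run would be expected to report that module name instead of software.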
00:28:27.003 EAL: No free 2048 kB hugepages reported on node 1 00:28:27.260 [2024-07-26 11:36:22.676668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.260 [2024-07-26 11:36:22.801402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:28.192 11:36:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:28.192 11:36:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:28.192 11:36:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:28.192 11:36:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:28.192 11:36:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:28.756 11:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:28.756 11:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:29.014 nvme0n1 00:28:29.014 11:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:29.014 11:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:29.271 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:29.271 Zero copy mechanism will not be used. 00:28:29.271 Running I/O for 2 seconds... 
00:28:31.168 00:28:31.168 Latency(us) 00:28:31.168 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:31.168 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:31.168 nvme0n1 : 2.01 2756.76 344.59 0.00 0.00 5799.36 1201.49 15146.10 00:28:31.168 =================================================================================================================== 00:28:31.168 Total : 2756.76 344.59 0.00 0.00 5799.36 1201.49 15146.10 00:28:31.168 0 00:28:31.168 11:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:31.168 11:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:31.168 11:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:31.169 11:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:31.169 11:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:31.169 | select(.opcode=="crc32c") 00:28:31.169 | "\(.module_name) \(.executed)"' 00:28:31.734 11:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:31.734 11:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:31.734 11:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:31.734 11:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:31.734 11:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2225866 00:28:31.734 11:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2225866 ']' 00:28:31.734 11:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2225866 00:28:31.734 11:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:31.734 11:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:31.734 11:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2225866 00:28:31.734 11:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:31.734 11:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:31.734 11:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2225866' 00:28:31.734 killing process with pid 2225866 00:28:31.734 11:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2225866 00:28:31.734 Received shutdown signal, test time was about 2.000000 seconds 00:28:31.734 00:28:31.734 Latency(us) 00:28:31.734 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:31.734 =================================================================================================================== 00:28:31.734 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:31.734 11:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 2225866 00:28:31.992 11:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:31.992 11:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:31.992 11:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:31.992 11:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:31.992 11:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:31.992 11:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:31.992 11:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:31.992 11:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2226413 00:28:31.992 11:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2226413 /var/tmp/bperf.sock 00:28:31.992 11:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:31.992 11:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2226413 ']' 00:28:31.992 11:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:31.992 11:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:31.992 11:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:31.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:31.992 11:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:31.993 11:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:31.993 [2024-07-26 11:36:27.588032] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
00:28:31.993 [2024-07-26 11:36:27.588122] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2226413 ] 00:28:31.993 EAL: No free 2048 kB hugepages reported on node 1 00:28:32.251 [2024-07-26 11:36:27.655550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.251 [2024-07-26 11:36:27.777096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:32.251 11:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:32.251 11:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:32.251 11:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:32.251 11:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:32.251 11:36:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:32.817 11:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:32.817 11:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:33.075 nvme0n1 00:28:33.075 11:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:33.075 11:36:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:33.333 Running I/O for 2 seconds... 
00:28:35.231
00:28:35.231 Latency(us)
00:28:35.231 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:35.231 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:35.231 nvme0n1 : 2.01 20181.87 78.84 0.00 0.00 6335.55 2633.58 14466.47
00:28:35.231 ===================================================================================================================
00:28:35.231 Total : 20181.87 78.84 0.00 0.00 6335.55 2633.58 14466.47
00:28:35.231 0
00:28:35.489 11:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:28:35.489 11:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:28:35.489 11:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:28:35.489 11:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:28:35.489 | select(.opcode=="crc32c")
00:28:35.489 | "\(.module_name) \(.executed)"'
00:28:35.489 11:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:28:35.778 11:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:28:35.778 11:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:28:35.778 11:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:28:35.778 11:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:28:35.778 11:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2226413
00:28:35.779 11:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2226413 ']'
00:28:35.779 11:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2226413
00:28:35.779 11:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:28:35.779 11:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:35.779 11:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2226413
00:28:35.779 11:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:28:35.779 11:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:28:35.779 11:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2226413'
killing process with pid 2226413
00:28:35.779 11:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2226413
Received shutdown signal, test time was about 2.000000 seconds
00:28:35.779
00:28:35.779 Latency(us)
00:28:35.779 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:35.779 ===================================================================================================================
00:28:35.779 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:35.779 11:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean --
common/autotest_common.sh@974 -- # wait 2226413 00:28:36.068 11:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:36.068 11:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:36.068 11:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:36.068 11:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:36.068 11:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:36.068 11:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:36.068 11:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:36.068 11:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2226946 00:28:36.068 11:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:36.068 11:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2226946 /var/tmp/bperf.sock 00:28:36.068 11:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2226946 ']' 00:28:36.068 11:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:36.068 11:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:36.068 11:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:36.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:36.068 11:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:36.068 11:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:36.068 [2024-07-26 11:36:31.663114] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:28:36.068 [2024-07-26 11:36:31.663191] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2226946 ] 00:28:36.068 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:36.068 Zero copy mechanism will not be used. 
00:28:36.068 EAL: No free 2048 kB hugepages reported on node 1 00:28:36.068 [2024-07-26 11:36:31.724631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.326 [2024-07-26 11:36:31.845540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:36.326 11:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:36.326 11:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:36.326 11:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:36.326 11:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:36.326 11:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:36.892 11:36:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:36.893 11:36:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:37.458 nvme0n1 00:28:37.458 11:36:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:37.458 11:36:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:37.458 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:37.458 Zero copy mechanism will not be used. 00:28:37.458 Running I/O for 2 seconds... 
00:28:39.357
00:28:39.357 Latency(us)
00:28:39.357 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:39.357 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:28:39.357 nvme0n1 : 2.00 3297.14 412.14 0.00 0.00 4842.23 3932.16 10145.94
00:28:39.357 ===================================================================================================================
00:28:39.357 Total : 3297.14 412.14 0.00 0.00 4842.23 3932.16 10145.94
00:28:39.357 0
00:28:39.357 11:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:28:39.357 11:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:28:39.614 11:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:28:39.614 11:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:28:39.614 | select(.opcode=="crc32c")
00:28:39.614 | "\(.module_name) \(.executed)"'
00:28:39.614 11:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:28:39.871 11:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:28:39.871 11:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:28:39.871 11:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:28:39.871 11:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:28:39.871 11:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2226946
00:28:39.872 11:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2226946 ']'
00:28:39.872 11:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2226946
00:28:39.872 11:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:28:39.872 11:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:39.872 11:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2226946
00:28:39.872 11:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:28:39.872 11:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:28:39.872 11:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2226946'
killing process with pid 2226946
00:28:39.872 11:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2226946
Received shutdown signal, test time was about 2.000000 seconds
00:28:39.872
00:28:39.872 Latency(us)
00:28:39.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:39.872 ===================================================================================================================
00:28:39.872 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:39.872 11:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean --
common/autotest_common.sh@974 -- # wait 2226946 00:28:40.129 11:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2225255 00:28:40.129 11:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2225255 ']' 00:28:40.129 11:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2225255 00:28:40.129 11:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:40.129 11:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:40.129 11:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2225255 00:28:40.129 11:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:40.129 11:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:40.129 11:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2225255' 00:28:40.129 killing process with pid 2225255 00:28:40.129 11:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2225255 00:28:40.129 11:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2225255 00:28:40.387 00:28:40.387 real 0m18.269s 00:28:40.387 user 0m37.705s 00:28:40.387 sys 0m4.944s 00:28:40.387 11:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:40.387 11:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:40.387 ************************************ 00:28:40.387 END TEST nvmf_digest_clean 00:28:40.387 ************************************ 00:28:40.387 11:36:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:40.387 11:36:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:40.387 11:36:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:40.387 11:36:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:40.387 ************************************ 00:28:40.387 START TEST nvmf_digest_error 00:28:40.387 ************************************ 00:28:40.387 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:28:40.387 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:40.387 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:40.387 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:40.387 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:40.387 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2227386 00:28:40.387 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:40.387 11:36:36 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2227386 00:28:40.387 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2227386 ']' 00:28:40.387 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:40.388 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:40.388 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:40.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:40.388 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:40.388 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:40.645 [2024-07-26 11:36:36.095690] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:28:40.645 [2024-07-26 11:36:36.095800] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:40.645 EAL: No free 2048 kB hugepages reported on node 1 00:28:40.645 [2024-07-26 11:36:36.177706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.645 [2024-07-26 11:36:36.302294] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:40.645 [2024-07-26 11:36:36.302350] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:40.645 [2024-07-26 11:36:36.302367] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:40.645 [2024-07-26 11:36:36.302381] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:40.645 [2024-07-26 11:36:36.302393] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:40.645 [2024-07-26 11:36:36.302422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:40.903 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:40.903 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:40.903 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:40.903 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:40.903 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:40.903 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:40.903 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:40.903 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.903 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:40.903 [2024-07-26 11:36:36.387056] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:40.903 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.903 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:40.903 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:40.903 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.903 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:40.903 null0 00:28:40.903 [2024-07-26 11:36:36.511730] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:40.903 [2024-07-26 11:36:36.535993] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:40.903 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.903 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:40.903 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:40.903 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:40.903 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:40.903 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:40.903 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2227522 00:28:40.903 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:28:40.903 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2227522 /var/tmp/bperf.sock 00:28:40.903 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2227522 ']' 
00:28:40.903 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:40.903 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:40.903 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:40.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:40.903 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:40.903 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:41.161 [2024-07-26 11:36:36.586389] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:28:41.161 [2024-07-26 11:36:36.586472] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2227522 ] 00:28:41.161 EAL: No free 2048 kB hugepages reported on node 1 00:28:41.161 [2024-07-26 11:36:36.653162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.161 [2024-07-26 11:36:36.774602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:41.419 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:41.419 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:41.419 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:41.419 11:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:41.676 11:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:41.676 11:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.676 11:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:41.933 11:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.933 11:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:41.933 11:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:42.191 nvme0n1 00:28:42.191 11:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:42.191 11:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.191 11:36:37 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:42.191 11:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.191 11:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:42.191 11:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:42.460 Running I/O for 2 seconds... 00:28:42.460 [2024-07-26 11:36:37.894677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.460 [2024-07-26 11:36:37.894725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.460 [2024-07-26 11:36:37.894746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.460 [2024-07-26 11:36:37.911264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.460 [2024-07-26 11:36:37.911301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.460 [2024-07-26 11:36:37.911322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.460 [2024-07-26 11:36:37.925210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.460 [2024-07-26 11:36:37.925244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.460 [2024-07-26 11:36:37.925263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.460 [2024-07-26 11:36:37.936952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.460 [2024-07-26 11:36:37.936986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.460 [2024-07-26 11:36:37.937006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.460 [2024-07-26 11:36:37.952403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.460 [2024-07-26 11:36:37.952448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.460 [2024-07-26 11:36:37.952469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.460 [2024-07-26 11:36:37.965677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.460 [2024-07-26 11:36:37.965719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.460 [2024-07-26 11:36:37.965740] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.460 [2024-07-26 11:36:37.977968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.460 [2024-07-26 11:36:37.978002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.460 [2024-07-26 11:36:37.978020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.460 [2024-07-26 11:36:37.993852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.460 [2024-07-26 11:36:37.993888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.460 [2024-07-26 11:36:37.993908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.460 [2024-07-26 11:36:38.006392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.460 [2024-07-26 11:36:38.006426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.460 [2024-07-26 11:36:38.006455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.460 [2024-07-26 11:36:38.021124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.460 [2024-07-26 11:36:38.021157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.460 [2024-07-26 11:36:38.021177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.460 [2024-07-26 11:36:38.035637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.460 [2024-07-26 11:36:38.035671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.460 [2024-07-26 11:36:38.035691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.460 [2024-07-26 11:36:38.052593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.460 [2024-07-26 11:36:38.052627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.460 [2024-07-26 11:36:38.052646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.460 [2024-07-26 11:36:38.064731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.460 [2024-07-26 11:36:38.064764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:42.460 [2024-07-26 11:36:38.064784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.460 [2024-07-26 11:36:38.079037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.460 [2024-07-26 11:36:38.079072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.460 [2024-07-26 11:36:38.079097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.460 [2024-07-26 11:36:38.093230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.460 [2024-07-26 11:36:38.093263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.460 [2024-07-26 11:36:38.093282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.460 [2024-07-26 11:36:38.105066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.460 [2024-07-26 11:36:38.105099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.460 [2024-07-26 11:36:38.105119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.460 [2024-07-26 11:36:38.118918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.460 [2024-07-26 11:36:38.118951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.460 [2024-07-26 11:36:38.118970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.718 [2024-07-26 11:36:38.137107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.718 [2024-07-26 11:36:38.137142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.718 [2024-07-26 11:36:38.137161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.718 [2024-07-26 11:36:38.152822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.718 [2024-07-26 11:36:38.152857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.718 [2024-07-26 11:36:38.152876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.718 [2024-07-26 11:36:38.164976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.718 [2024-07-26 11:36:38.165010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 
lba:11040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.718 [2024-07-26 11:36:38.165028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.718 [2024-07-26 11:36:38.178764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.718 [2024-07-26 11:36:38.178798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:8671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.718 [2024-07-26 11:36:38.178817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.718 [2024-07-26 11:36:38.193648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.718 [2024-07-26 11:36:38.193681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.718 [2024-07-26 11:36:38.193699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.718 [2024-07-26 11:36:38.208113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.718 [2024-07-26 11:36:38.208154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.718 [2024-07-26 11:36:38.208175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.719 [2024-07-26 11:36:38.220703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.719 [2024-07-26 11:36:38.220737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.719 [2024-07-26 11:36:38.220756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.719 [2024-07-26 11:36:38.235503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.719 [2024-07-26 11:36:38.235537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.719 [2024-07-26 11:36:38.235556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.719 [2024-07-26 11:36:38.248712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.719 [2024-07-26 11:36:38.248745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.719 [2024-07-26 11:36:38.248764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.719 [2024-07-26 11:36:38.261911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.719 [2024-07-26 11:36:38.261944] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.719 [2024-07-26 11:36:38.261962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.719 [2024-07-26 11:36:38.275663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.719 [2024-07-26 11:36:38.275698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.719 [2024-07-26 11:36:38.275716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.719 [2024-07-26 11:36:38.289276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.719 [2024-07-26 11:36:38.289309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.719 [2024-07-26 11:36:38.289328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.719 [2024-07-26 11:36:38.301807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.719 [2024-07-26 11:36:38.301841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.719 [2024-07-26 11:36:38.301859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.719 [2024-07-26 11:36:38.317161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.719 [2024-07-26 11:36:38.317194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.719 [2024-07-26 11:36:38.317212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.719 [2024-07-26 11:36:38.330876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.719 [2024-07-26 11:36:38.330910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.719 [2024-07-26 11:36:38.330929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.719 [2024-07-26 11:36:38.344590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.719 [2024-07-26 11:36:38.344622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.719 [2024-07-26 11:36:38.344641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.719 [2024-07-26 11:36:38.358133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 
00:28:42.719 [2024-07-26 11:36:38.358168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.719 [2024-07-26 11:36:38.358186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.719 [2024-07-26 11:36:38.373900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.719 [2024-07-26 11:36:38.373933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.719 [2024-07-26 11:36:38.373951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.977 [2024-07-26 11:36:38.387035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.977 [2024-07-26 11:36:38.387069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:19373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.977 [2024-07-26 11:36:38.387088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.977 [2024-07-26 11:36:38.400328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.977 [2024-07-26 11:36:38.400361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.977 [2024-07-26 11:36:38.400379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.977 [2024-07-26 11:36:38.416571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.977 [2024-07-26 11:36:38.416604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.977 [2024-07-26 11:36:38.416623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.977 [2024-07-26 11:36:38.430918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.977 [2024-07-26 11:36:38.430951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.977 [2024-07-26 11:36:38.430970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.977 [2024-07-26 11:36:38.443176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.977 [2024-07-26 11:36:38.443210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.977 [2024-07-26 11:36:38.443236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.977 [2024-07-26 11:36:38.457227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.977 [2024-07-26 11:36:38.457261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.977 [2024-07-26 11:36:38.457280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.977 [2024-07-26 11:36:38.470720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.977 [2024-07-26 11:36:38.470753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:8118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.977 [2024-07-26 11:36:38.470773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.977 [2024-07-26 11:36:38.484466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.977 [2024-07-26 11:36:38.484499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.977 [2024-07-26 11:36:38.484518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.977 [2024-07-26 11:36:38.498581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.977 [2024-07-26 11:36:38.498616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.977 [2024-07-26 11:36:38.498634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.977 [2024-07-26 11:36:38.510343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.977 [2024-07-26 11:36:38.510378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.978 [2024-07-26 11:36:38.510396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.978 [2024-07-26 11:36:38.523670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.978 [2024-07-26 11:36:38.523716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.978 [2024-07-26 11:36:38.523735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.978 [2024-07-26 11:36:38.538244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.978 [2024-07-26 11:36:38.538278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.978 [2024-07-26 11:36:38.538304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.978 [2024-07-26 11:36:38.552943] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.978 [2024-07-26 11:36:38.552986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.978 [2024-07-26 11:36:38.553005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.978 [2024-07-26 11:36:38.566393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.978 [2024-07-26 11:36:38.566439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.978 [2024-07-26 11:36:38.566462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.978 [2024-07-26 11:36:38.579326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.978 [2024-07-26 11:36:38.579360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.978 [2024-07-26 11:36:38.579379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.978 [2024-07-26 11:36:38.593356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.978 [2024-07-26 11:36:38.593390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.978 [2024-07-26 11:36:38.593409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.978 [2024-07-26 11:36:38.606301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.978 [2024-07-26 11:36:38.606335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:23666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.978 [2024-07-26 11:36:38.606354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.978 [2024-07-26 11:36:38.621563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.978 [2024-07-26 11:36:38.621596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.978 [2024-07-26 11:36:38.621615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.978 [2024-07-26 11:36:38.635654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0) 00:28:42.978 [2024-07-26 11:36:38.635688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:7223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.978 [2024-07-26 11:36:38.635707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0
00:28:43.236 [2024-07-26 11:36:38.649569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x228c2f0)
00:28:43.236 [2024-07-26 11:36:38.649603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:43.236 [2024-07-26 11:36:38.649621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern repeats every 11-16 ms through 11:36:39.871 on tqpair 0x228c2f0: each READ on qid:1 (len:1, one 4096-byte block, matching this run's 4096-byte IO size) fails data digest verification and completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22); only the timestamp, cid and lba vary. The get_transient_errcount step below tallies 143 such completions for the 2-second run ...]
00:28:44.274
00:28:44.274                                                                    Latency(us)
00:28:44.274 Device Information                                                          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:44.274 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:28:44.274 	 nvme0n1              :       2.00   18265.76      71.35       0.00       0.00    6999.06    3762.25   20388.98
00:28:44.274 ===================================================================================================================
00:28:44.274 Total                   :              18265.76      71.35       0.00       0.00    6999.06    3762.25   20388.98
00:28:44.274 0
00:28:44.274 11:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:44.274 11:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:44.274 11:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:44.274 | .driver_specific
00:28:44.274 | .nvme_error
00:28:44.274 | .status_code
00:28:44.274 | .command_transient_transport_error'
00:28:44.274 11:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:44.839 11:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 143 > 0 ))
00:28:44.839 11:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2227522
00:28:44.839 11:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2227522 ']'
00:28:44.839 11:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2227522
00:28:44.839 11:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:28:44.839 11:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:44.839 11:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2227522
00:28:44.839 11:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:28:44.839 11:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:28:44.839 11:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2227522'
00:28:44.839 killing process with pid 2227522
00:28:44.839 11:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2227522
Received shutdown signal, test time was about 2.000000 seconds
00:28:44.839
00:28:44.839                                                                    Latency(us)
00:28:44.839 Device Information                                                          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:44.839 ===================================================================================================================
00:28:44.839 Total                   :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:28:44.839 11:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2227522
00:28:45.406 11:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
11:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
11:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
11:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
11:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
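In script form, the counting step traced above is just one iostat RPC plus a jq filter. A minimal sketch, assuming a bdevperf instance is already answering RPCs on /var/tmp/bperf.sock and was configured with bdev_nvme_set_options --nvme-error-stat (both visible in this log; without that option the per-bdev nvme_error counters stay empty). The $rpc shorthand is introduced here for readability:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # The jq path mirrors the filter in the trace: dig the transient
  # transport error counter out of the bdev's driver_specific stats.
  errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 )) && echo "saw $errcount transient transport errors"

For the 4096-byte run above this prints 143, which is exactly the value the harness tests in the (( 143 > 0 )) line of the trace.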
11:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2227946
11:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
11:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2227946 /var/tmp/bperf.sock
11:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2227946 ']'
11:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
11:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
11:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
11:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
11:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:45.406 [2024-07-26 11:36:40.818938] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization...
00:28:45.406 [2024-07-26 11:36:40.819041] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2227946 ]
00:28:45.406 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:45.406 Zero copy mechanism will not be used.
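The launch just traced is the usual bdevperf-as-RPC-server pattern: -z keeps the app idle until a perform_tests RPC arrives, and -r points it at a dedicated UNIX socket so its RPCs do not collide with the target application's default socket. A minimal sketch of the same launch, assuming an SPDK checkout at $spdk (paths and flags copied from the trace; the readiness loop is a simplification of the harness's waitforlisten helper):

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$spdk/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 131072 -t 2 -q 16 -z &   # -z: hold I/O until perform_tests
  bperfpid=$!
  # Simplified readiness check: retry a harmless RPC until the socket answers.
  until "$spdk/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods &> /dev/null; do
      sleep 0.1
  done

Only after the socket answers does the harness attach bdevs and kick off the run, which is what the continuation of the trace below shows.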
00:28:45.406 EAL: No free 2048 kB hugepages reported on node 1
00:28:45.406 [2024-07-26 11:36:40.893495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:45.406 [2024-07-26 11:36:41.013819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:28:45.664 11:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:28:45.664 11:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:28:45.664 11:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:45.664 11:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:45.922 11:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:45.922 11:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:45.922 11:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:45.922 11:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:45.922 11:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:45.922 11:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:46.180 nvme0n1
00:28:46.180 11:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:28:46.180 11:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:46.180 11:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:46.180 11:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:46.180 11:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:46.180 11:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:46.438 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:46.438 Zero copy mechanism will not be used.
00:28:46.438 Running I/O for 2 seconds...
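Stripped of the xtrace noise, the RPCs just logged are the whole digest-error recipe: enable per-command NVMe error counters and unlimited retries in the bdevperf app, attach the controller with TCP data digest enabled (--ddgst), then inject crc32c corruption through the accel layer so computed data digests stop matching. A condensed sketch under two assumptions drawn from the trace: bperf_rpc addresses /var/tmp/bperf.sock, while rpc_cmd (which carries no -s flag) evidently addresses the nvmf target app on rpc.py's default socket:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # bdevperf side: count transient transport errors, never stop retrying.
  "$rpc" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Target side: presumably clears any leftover injection so the attach succeeds.
  "$rpc" accel_error_inject_error -o crc32c -t disable
  "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Corrupt the next 32 crc32c operations so the digests go out wrong.
  "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32
  # Start the 2-second randread run defined at bdevperf launch time.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests

Each corrupted digest then surfaces on the host as one of the (00/22) COMMAND TRANSIENT TRANSPORT ERROR completions that follow.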
00:28:46.438 [2024-07-26 11:36:41.929862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0)
00:28:46.438 [2024-07-26 11:36:41.929915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:46.438 [2024-07-26 11:36:41.929937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:46.438 [2024-07-26 11:36:41.940162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0)
00:28:46.438 [2024-07-26 11:36:41.940202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:46.438 [2024-07-26 11:36:41.940223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-line pattern repeats roughly every 10 ms through 11:36:42.177 on tqpair 0x1433ec0: with the injected crc32c corruption active, each READ on qid:1 cid:15 (len:32 blocks, one 131072-byte IO at a 4096-byte block size) fails data digest verification and completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22); only the timestamp, lba and sqhd (cycling 0001/0021/0041/0061) vary ...]
00:28:46.697 [2024-07-26 11:36:42.186705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0)
00:28:46.697 [2024-07-26 11:36:42.186738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1
lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.697 [2024-07-26 11:36:42.186757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.697 [2024-07-26 11:36:42.196132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:46.697 [2024-07-26 11:36:42.196164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.697 [2024-07-26 11:36:42.196183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.697 [2024-07-26 11:36:42.205858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:46.697 [2024-07-26 11:36:42.205890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.697 [2024-07-26 11:36:42.205909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.697 [2024-07-26 11:36:42.215496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:46.697 [2024-07-26 11:36:42.215529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.697 [2024-07-26 11:36:42.215547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.697 [2024-07-26 11:36:42.225128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:46.697 [2024-07-26 11:36:42.225161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.697 [2024-07-26 11:36:42.225179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.697 [2024-07-26 11:36:42.234857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:46.697 [2024-07-26 11:36:42.234900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.697 [2024-07-26 11:36:42.234920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.697 [2024-07-26 11:36:42.245572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:46.697 [2024-07-26 11:36:42.245604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.697 [2024-07-26 11:36:42.245623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.697 [2024-07-26 11:36:42.256095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:46.697 [2024-07-26 11:36:42.256133] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.697 [2024-07-26 11:36:42.256153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.697 [2024-07-26 11:36:42.266593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:46.697 [2024-07-26 11:36:42.266625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.697 [2024-07-26 11:36:42.266645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.697 [2024-07-26 11:36:42.276799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:46.697 [2024-07-26 11:36:42.276830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.697 [2024-07-26 11:36:42.276849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.697 [2024-07-26 11:36:42.286926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:46.697 [2024-07-26 11:36:42.286960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.697 [2024-07-26 11:36:42.286979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.697 [2024-07-26 11:36:42.296535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:46.697 [2024-07-26 11:36:42.296567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.697 [2024-07-26 11:36:42.296586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.697 [2024-07-26 11:36:42.306878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:46.697 [2024-07-26 11:36:42.306911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.697 [2024-07-26 11:36:42.306930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.697 [2024-07-26 11:36:42.317234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:46.698 [2024-07-26 11:36:42.317266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.698 [2024-07-26 11:36:42.317285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.698 [2024-07-26 11:36:42.327699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 
00:28:46.698 [2024-07-26 11:36:42.327732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.698 [2024-07-26 11:36:42.327750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.698 [2024-07-26 11:36:42.338360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:46.698 [2024-07-26 11:36:42.338399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.698 [2024-07-26 11:36:42.338423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.698 [2024-07-26 11:36:42.348968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:46.698 [2024-07-26 11:36:42.349001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.698 [2024-07-26 11:36:42.349019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.956 [2024-07-26 11:36:42.359498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:46.956 [2024-07-26 11:36:42.359531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.956 [2024-07-26 11:36:42.359549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.956 [2024-07-26 11:36:42.369948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:46.956 [2024-07-26 11:36:42.369981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.956 [2024-07-26 11:36:42.369999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.956 [2024-07-26 11:36:42.380330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:46.956 [2024-07-26 11:36:42.380363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.956 [2024-07-26 11:36:42.380381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.956 [2024-07-26 11:36:42.390710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:46.956 [2024-07-26 11:36:42.390742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.956 [2024-07-26 11:36:42.390761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.956 [2024-07-26 11:36:42.401226] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:46.956 [2024-07-26 11:36:42.401260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.956 [2024-07-26 11:36:42.401279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.956 [2024-07-26 11:36:42.411662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:46.956 [2024-07-26 11:36:42.411695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.956 [2024-07-26 11:36:42.411714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.956 [2024-07-26 11:36:42.421860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:46.956 [2024-07-26 11:36:42.421892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.956 [2024-07-26 11:36:42.421910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.956 [2024-07-26 11:36:42.432448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:46.956 [2024-07-26 11:36:42.432495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.956 [2024-07-26 11:36:42.432515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.956 [2024-07-26 11:36:42.442982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:46.956 [2024-07-26 11:36:42.443024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.956 [2024-07-26 11:36:42.443043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.956 [2024-07-26 11:36:42.453165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:46.956 [2024-07-26 11:36:42.453199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.956 [2024-07-26 11:36:42.453218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.956 [2024-07-26 11:36:42.463507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:46.956 [2024-07-26 11:36:42.463540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.956 [2024-07-26 11:36:42.463559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:28:46.956 [2024-07-26 11:36:42.473591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:46.956 [2024-07-26 11:36:42.473623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.956 [2024-07-26 11:36:42.473642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.956 [2024-07-26 11:36:42.483963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:46.956 [2024-07-26 11:36:42.483996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.956 [2024-07-26 11:36:42.484016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.956 [2024-07-26 11:36:42.494491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:46.956 [2024-07-26 11:36:42.494523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.956 [2024-07-26 11:36:42.494542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.957 [2024-07-26 11:36:42.504865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:46.957 [2024-07-26 11:36:42.504898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.957 [2024-07-26 11:36:42.504917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.957 [2024-07-26 11:36:42.515392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:46.957 [2024-07-26 11:36:42.515424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.957 [2024-07-26 11:36:42.515451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.957 [2024-07-26 11:36:42.525795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:46.957 [2024-07-26 11:36:42.525828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.957 [2024-07-26 11:36:42.525846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.957 [2024-07-26 11:36:42.536261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:46.957 [2024-07-26 11:36:42.536293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.957 [2024-07-26 11:36:42.536312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.957 [2024-07-26 11:36:42.546901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:46.957 [2024-07-26 11:36:42.546932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.957 [2024-07-26 11:36:42.546951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.957 [2024-07-26 11:36:42.557506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:46.957 [2024-07-26 11:36:42.557539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.957 [2024-07-26 11:36:42.557558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.957 [2024-07-26 11:36:42.568137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:46.957 [2024-07-26 11:36:42.568170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.957 [2024-07-26 11:36:42.568189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.957 [2024-07-26 11:36:42.578761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:46.957 [2024-07-26 11:36:42.578794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.957 [2024-07-26 11:36:42.578813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.957 [2024-07-26 11:36:42.589296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:46.957 [2024-07-26 11:36:42.589329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.957 [2024-07-26 11:36:42.589348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.957 [2024-07-26 11:36:42.599858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:46.957 [2024-07-26 11:36:42.599890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.957 [2024-07-26 11:36:42.599909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.957 [2024-07-26 11:36:42.610260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:46.957 [2024-07-26 11:36:42.610291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.957 [2024-07-26 11:36:42.610316] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.215 [2024-07-26 11:36:42.620835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.215 [2024-07-26 11:36:42.620868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.215 [2024-07-26 11:36:42.620886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.215 [2024-07-26 11:36:42.631461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.215 [2024-07-26 11:36:42.631493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.215 [2024-07-26 11:36:42.631512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.215 [2024-07-26 11:36:42.641820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.215 [2024-07-26 11:36:42.641860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.215 [2024-07-26 11:36:42.641879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.215 [2024-07-26 11:36:42.652475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.215 [2024-07-26 11:36:42.652508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.215 [2024-07-26 11:36:42.652527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.215 [2024-07-26 11:36:42.663173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.215 [2024-07-26 11:36:42.663214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.215 [2024-07-26 11:36:42.663232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.215 [2024-07-26 11:36:42.673928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.215 [2024-07-26 11:36:42.673961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.215 [2024-07-26 11:36:42.673980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.216 [2024-07-26 11:36:42.684335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.216 [2024-07-26 11:36:42.684367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:47.216 [2024-07-26 11:36:42.684385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.216 [2024-07-26 11:36:42.694981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.216 [2024-07-26 11:36:42.695014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.216 [2024-07-26 11:36:42.695032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.216 [2024-07-26 11:36:42.705478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.216 [2024-07-26 11:36:42.705511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.216 [2024-07-26 11:36:42.705529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.216 [2024-07-26 11:36:42.715834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.216 [2024-07-26 11:36:42.715866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.216 [2024-07-26 11:36:42.715884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.216 [2024-07-26 11:36:42.726360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.216 [2024-07-26 11:36:42.726393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.216 [2024-07-26 11:36:42.726412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.216 [2024-07-26 11:36:42.736772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.216 [2024-07-26 11:36:42.736805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.216 [2024-07-26 11:36:42.736824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.216 [2024-07-26 11:36:42.747370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.216 [2024-07-26 11:36:42.747402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.216 [2024-07-26 11:36:42.747420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.216 [2024-07-26 11:36:42.757856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.216 [2024-07-26 11:36:42.757890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.216 [2024-07-26 11:36:42.757908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.216 [2024-07-26 11:36:42.768697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.216 [2024-07-26 11:36:42.768730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.216 [2024-07-26 11:36:42.768749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.216 [2024-07-26 11:36:42.779328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.216 [2024-07-26 11:36:42.779360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.216 [2024-07-26 11:36:42.779379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.216 [2024-07-26 11:36:42.789956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.216 [2024-07-26 11:36:42.789989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.216 [2024-07-26 11:36:42.790014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.216 [2024-07-26 11:36:42.800440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.216 [2024-07-26 11:36:42.800472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.216 [2024-07-26 11:36:42.800491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.216 [2024-07-26 11:36:42.811062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.216 [2024-07-26 11:36:42.811097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.216 [2024-07-26 11:36:42.811115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.216 [2024-07-26 11:36:42.821489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.216 [2024-07-26 11:36:42.821521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.216 [2024-07-26 11:36:42.821539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.216 [2024-07-26 11:36:42.831903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.216 [2024-07-26 11:36:42.831935] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.216 [2024-07-26 11:36:42.831955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.216 [2024-07-26 11:36:42.842666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.216 [2024-07-26 11:36:42.842698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.216 [2024-07-26 11:36:42.842717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.216 [2024-07-26 11:36:42.853340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.216 [2024-07-26 11:36:42.853373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.216 [2024-07-26 11:36:42.853392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.216 [2024-07-26 11:36:42.863724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.216 [2024-07-26 11:36:42.863757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.216 [2024-07-26 11:36:42.863776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.216 [2024-07-26 11:36:42.874398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.216 [2024-07-26 11:36:42.874451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.216 [2024-07-26 11:36:42.874473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.475 [2024-07-26 11:36:42.885242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.475 [2024-07-26 11:36:42.885281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.475 [2024-07-26 11:36:42.885300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.475 [2024-07-26 11:36:42.895846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.475 [2024-07-26 11:36:42.895878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.475 [2024-07-26 11:36:42.895897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.475 [2024-07-26 11:36:42.906504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 
00:28:47.475 [2024-07-26 11:36:42.906540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.475 [2024-07-26 11:36:42.906559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.475 [2024-07-26 11:36:42.917044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.475 [2024-07-26 11:36:42.917075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.475 [2024-07-26 11:36:42.917094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.475 [2024-07-26 11:36:42.927525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.475 [2024-07-26 11:36:42.927557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.475 [2024-07-26 11:36:42.927575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.475 [2024-07-26 11:36:42.938132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.475 [2024-07-26 11:36:42.938164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.475 [2024-07-26 11:36:42.938183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.475 [2024-07-26 11:36:42.948537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.475 [2024-07-26 11:36:42.948570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.475 [2024-07-26 11:36:42.948589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.475 [2024-07-26 11:36:42.958978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.475 [2024-07-26 11:36:42.959010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.475 [2024-07-26 11:36:42.959029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.475 [2024-07-26 11:36:42.969505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.475 [2024-07-26 11:36:42.969538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.475 [2024-07-26 11:36:42.969557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.475 [2024-07-26 11:36:42.980172] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.475 [2024-07-26 11:36:42.980205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.475 [2024-07-26 11:36:42.980225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.475 [2024-07-26 11:36:42.990869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.475 [2024-07-26 11:36:42.990904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.475 [2024-07-26 11:36:42.990923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.475 [2024-07-26 11:36:43.001570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.475 [2024-07-26 11:36:43.001605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.475 [2024-07-26 11:36:43.001625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.475 [2024-07-26 11:36:43.012360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.475 [2024-07-26 11:36:43.012393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.475 [2024-07-26 11:36:43.012411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.475 [2024-07-26 11:36:43.023395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.475 [2024-07-26 11:36:43.023436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.475 [2024-07-26 11:36:43.023457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.475 [2024-07-26 11:36:43.033967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.475 [2024-07-26 11:36:43.033999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.475 [2024-07-26 11:36:43.034018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.475 [2024-07-26 11:36:43.044485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.475 [2024-07-26 11:36:43.044519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.475 [2024-07-26 11:36:43.044537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:28:47.475 [2024-07-26 11:36:43.054908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.475 [2024-07-26 11:36:43.054941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.476 [2024-07-26 11:36:43.054960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.476 [2024-07-26 11:36:43.065396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.476 [2024-07-26 11:36:43.065438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.476 [2024-07-26 11:36:43.065465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.476 [2024-07-26 11:36:43.076086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.476 [2024-07-26 11:36:43.076119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.476 [2024-07-26 11:36:43.076137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.476 [2024-07-26 11:36:43.086718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.476 [2024-07-26 11:36:43.086749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.476 [2024-07-26 11:36:43.086768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.476 [2024-07-26 11:36:43.097185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.476 [2024-07-26 11:36:43.097218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.476 [2024-07-26 11:36:43.097236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.476 [2024-07-26 11:36:43.107580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.476 [2024-07-26 11:36:43.107612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.476 [2024-07-26 11:36:43.107631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.476 [2024-07-26 11:36:43.118164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.476 [2024-07-26 11:36:43.118198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.476 [2024-07-26 11:36:43.118217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.476 [2024-07-26 11:36:43.128637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.476 [2024-07-26 11:36:43.128670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.476 [2024-07-26 11:36:43.128689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.735 [2024-07-26 11:36:43.139063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.735 [2024-07-26 11:36:43.139096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.735 [2024-07-26 11:36:43.139115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.735 [2024-07-26 11:36:43.149535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.735 [2024-07-26 11:36:43.149567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.735 [2024-07-26 11:36:43.149586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.735 [2024-07-26 11:36:43.160002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.735 [2024-07-26 11:36:43.160035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.735 [2024-07-26 11:36:43.160054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.735 [2024-07-26 11:36:43.170500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.735 [2024-07-26 11:36:43.170532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.735 [2024-07-26 11:36:43.170551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.735 [2024-07-26 11:36:43.180994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.735 [2024-07-26 11:36:43.181027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.735 [2024-07-26 11:36:43.181046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.735 [2024-07-26 11:36:43.191574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.735 [2024-07-26 11:36:43.191606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.735 [2024-07-26 11:36:43.191625] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.735 [2024-07-26 11:36:43.203745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.735 [2024-07-26 11:36:43.203777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.735 [2024-07-26 11:36:43.203795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.735 [2024-07-26 11:36:43.215634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.735 [2024-07-26 11:36:43.215668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.735 [2024-07-26 11:36:43.215687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.735 [2024-07-26 11:36:43.227349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.735 [2024-07-26 11:36:43.227382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.735 [2024-07-26 11:36:43.227400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.735 [2024-07-26 11:36:43.238689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.735 [2024-07-26 11:36:43.238732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.735 [2024-07-26 11:36:43.238751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.735 [2024-07-26 11:36:43.250468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.735 [2024-07-26 11:36:43.250511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.735 [2024-07-26 11:36:43.250545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.735 [2024-07-26 11:36:43.260919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.735 [2024-07-26 11:36:43.260953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.735 [2024-07-26 11:36:43.260972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.735 [2024-07-26 11:36:43.270981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0) 00:28:47.735 [2024-07-26 11:36:43.271014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:47.735 [2024-07-26 11:36:43.271032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:47.735 [2024-07-26 11:36:43.281175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0)
00:28:47.735 [2024-07-26 11:36:43.281209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.735 [2024-07-26 11:36:43.281228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-record pattern (data digest error on tqpair=(0x1433ec0), READ sqid:1 cid:15 with a varying lba, COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats roughly every 10 ms; records from 11:36:43.291 through 11:36:43.904 elided ...]
00:28:48.257 [2024-07-26 11:36:43.913554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1433ec0)
00:28:48.257 [2024-07-26 11:36:43.913586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:48.257 [2024-07-26 11:36:43.913604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:48.514 
00:28:48.514 Latency(us)
00:28:48.514 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:48.514 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:28:48.514 nvme0n1 : 2.00 3060.38 382.55 0.00 0.00 5223.26 4514.70 12524.66
00:28:48.514 ===================================================================================================================
00:28:48.514 Total : 3060.38 382.55 0.00 0.00 5223.26 4514.70 12524.66
00:28:48.514 0
00:28:48.514 11:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:48.514 11:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:48.514 11:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:48.514 11:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:48.514 | .driver_specific
00:28:48.514 | .nvme_error
00:28:48.514 | .status_code
00:28:48.514 | .command_transient_transport_error'
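
The get_transient_errcount call traced above reads the per-controller NVMe error counters that bdev_nvme keeps once --nvme-error-stat is set, and pulls out how many completions carried COMMAND TRANSIENT TRANSPORT ERROR (00/22). A minimal bash sketch of that helper, reconstructed from the rpc.py and jq invocations visible in the trace (the wrapper shape is an assumption; only the two commands themselves are verbatim):

    #!/usr/bin/env bash
    # Sketch reconstructed from the traced commands, not copied from digest.sh.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Ask the bdevperf app (on its private RPC socket) for per-bdev I/O stats,
    # then extract the transient-transport-error counter from the nvme_error block.
    get_transient_errcount() {
        "$RPC" -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" |
            jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    # The test then asserts that at least one such error was counted; in this
    # run the counter came back as 197, hence the (( 197 > 0 )) check below.
    (( $(get_transient_errcount nvme0n1) > 0 ))
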
00:28:48.773 11:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 197 > 0 ))
00:28:48.773 11:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2227946
00:28:48.773 11:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2227946 ']'
00:28:48.773 11:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2227946
00:28:48.773 11:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:28:48.773 11:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:48.773 11:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2227946
00:28:48.773 11:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:28:48.773 11:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:28:48.773 11:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2227946'
00:28:48.773 killing process with pid 2227946
00:28:48.773 11:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2227946
00:28:48.773 Received shutdown signal, test time was about 2.000000 seconds
00:28:48.773 
00:28:48.773 Latency(us)
00:28:48.773 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:48.773 ===================================================================================================================
00:28:48.773 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:48.773 11:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2227946
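
With the read-path pass verified, the traced killprocess helper tears down the first bdevperf instance before the write-path pass begins. The checks visible in the trace ('[' -z ']', kill -0, uname, ps --no-headers -o comm=, and the comparison against sudo) suggest roughly the following logic; this is an inference from the trace, not the autotest_common.sh source:

    # Inferred sketch of the killprocess logic traced above (assumption).
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1               # no pid supplied
        kill -0 "$pid" || return 0              # already gone, nothing to do
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")   # "reactor_1" here
            [ "$process_name" = sudo ] && return 1            # refuse to kill sudo
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                             # reap it so the RPC socket frees up
    }
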
00:28:49.031 11:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:28:49.031 11:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:49.031 11:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:49.031 11:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:28:49.031 11:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:28:49.031 11:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2228466
00:28:49.031 11:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:28:49.031 11:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2228466 /var/tmp/bperf.sock
00:28:49.031 11:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2228466 ']'
00:28:49.032 11:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:49.032 11:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:28:49.032 11:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:49.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:49.032 11:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:28:49.032 11:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:49.290 [2024-07-26 11:36:44.646847] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization...
00:28:49.290 [2024-07-26 11:36:44.647025] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2228466 ]
00:28:49.290 EAL: No free 2048 kB hugepages reported on node 1
00:28:49.290 [2024-07-26 11:36:44.757118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:49.290 [2024-07-26 11:36:44.878914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:28:49.547 11:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:28:49.547 11:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:28:49.547 11:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:49.547 11:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:49.805 11:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:49.805 11:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:49.805 11:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:49.805 11:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:49.805 11:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:49.805 11:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:50.370 nvme0n1
00:28:50.370 11:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:28:50.370 11:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:50.370 11:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:50.370 11:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:50.370 11:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:50.370 11:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
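
The records above are the complete setup recipe for one digest-error pass: start bdevperf suspended, configure error accounting and endless retries, attach the target with data digest enabled, arm crc32c corruption on the target side, then start the timed run that follows. Collected into a single hedged bash sketch (all flags are verbatim from the trace; bundling them into one script, and spelling the traced rpc_cmd calls as plain rpc.py invocations against the default target socket, are assumptions):

    #!/usr/bin/env bash
    # One digest-error pass (randwrite, 4 KiB I/O, queue depth 128), per the trace.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BPERF_SOCK=/var/tmp/bperf.sock

    # 1. bdevperf on its own RPC socket; -z makes it wait for perform_tests.
    "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
        -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    # (the real flow runs waitforlisten here until $BPERF_SOCK accepts RPCs)

    # 2. Count NVMe errors per status code and retry failed I/O forever, so
    #    digest errors show up in bdev_get_iostat instead of failing the job.
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options \
        --nvme-error-stat --bdev-retry-count -1

    # 3. Target side: make sure crc32c error injection starts out disabled.
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

    # 4. Attach the TCP target with data digest enabled (--ddgst).
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # 5. Target side: corrupt crc32c results (-t corrupt -i 256, as traced),
    #    then kick off the timed run.
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests
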
00:28:50.370 Running I/O for 2 seconds...
00:28:50.370 [2024-07-26 11:36:46.021391] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8
00:28:50.370 [2024-07-26 11:36:46.021688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.370 [2024-07-26 11:36:46.021738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:28:50.629 [2024-07-26 11:36:46.035553] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8
00:28:50.629 [2024-07-26 11:36:46.035822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:50.629 [2024-07-26 11:36:46.035855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
[... the same three-record pattern (Data digest error on tqpair=(0x13827e0), WRITE with cid cycling through 125/126/2/1/0 and a varying lba, COMMAND TRANSIENT TRANSPORT ERROR (00/22) sqhd:007d) repeats roughly every 14 ms; records from 11:36:46.049 through 11:36:46.774 elided ...]
00:28:51.148 [2024-07-26 11:36:46.788127] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8
00:28:51.148 [2024-07-26 11:36:46.788454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:51.148 [2024-07-26 11:36:46.788486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:28:51.148 [2024-07-26 11:36:46.802399] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8
[2024-07-26 11:36:46.802688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.148 [2024-07-26 11:36:46.802719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.407 [2024-07-26 11:36:46.816720] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.407 [2024-07-26 11:36:46.817035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.407 [2024-07-26 11:36:46.817065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.407 [2024-07-26 11:36:46.831077] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.407 [2024-07-26 11:36:46.831401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.407 [2024-07-26 11:36:46.831438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.407 [2024-07-26 11:36:46.845369] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.407 [2024-07-26 11:36:46.845701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.407 [2024-07-26 11:36:46.845731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.407 [2024-07-26 11:36:46.859694] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.407 [2024-07-26 11:36:46.860028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.407 [2024-07-26 11:36:46.860058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.407 [2024-07-26 11:36:46.873979] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.407 [2024-07-26 11:36:46.874305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.407 [2024-07-26 11:36:46.874335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.407 [2024-07-26 11:36:46.888171] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.407 [2024-07-26 11:36:46.888486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.407 [2024-07-26 11:36:46.888522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.407 [2024-07-26 11:36:46.902513] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with 
pdu=0x2000190fe2e8 00:28:51.407 [2024-07-26 11:36:46.902824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.407 [2024-07-26 11:36:46.902854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.407 [2024-07-26 11:36:46.916876] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.407 [2024-07-26 11:36:46.917199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.407 [2024-07-26 11:36:46.917229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.407 [2024-07-26 11:36:46.931235] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.407 [2024-07-26 11:36:46.931555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.407 [2024-07-26 11:36:46.931585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.407 [2024-07-26 11:36:46.945526] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.407 [2024-07-26 11:36:46.945849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.407 [2024-07-26 11:36:46.945878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.407 [2024-07-26 11:36:46.959923] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.407 [2024-07-26 11:36:46.960247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.407 [2024-07-26 11:36:46.960277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.407 [2024-07-26 11:36:46.974246] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.407 [2024-07-26 11:36:46.974555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.407 [2024-07-26 11:36:46.974585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.407 [2024-07-26 11:36:46.988571] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.407 [2024-07-26 11:36:46.988882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.407 [2024-07-26 11:36:46.988911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.407 [2024-07-26 11:36:47.002857] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.407 [2024-07-26 11:36:47.003127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.407 [2024-07-26 11:36:47.003157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.407 [2024-07-26 11:36:47.017312] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.407 [2024-07-26 11:36:47.017652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.407 [2024-07-26 11:36:47.017683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.407 [2024-07-26 11:36:47.031622] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.407 [2024-07-26 11:36:47.031945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.407 [2024-07-26 11:36:47.031975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.407 [2024-07-26 11:36:47.046053] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.407 [2024-07-26 11:36:47.046358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.407 [2024-07-26 11:36:47.046388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.407 [2024-07-26 11:36:47.060404] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.407 [2024-07-26 11:36:47.060732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.407 [2024-07-26 11:36:47.060764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.666 [2024-07-26 11:36:47.074695] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.666 [2024-07-26 11:36:47.075000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.666 [2024-07-26 11:36:47.075031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.666 [2024-07-26 11:36:47.088956] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.666 [2024-07-26 11:36:47.089278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.666 [2024-07-26 11:36:47.089308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.666 [2024-07-26 11:36:47.103263] tcp.c:2113:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.666 [2024-07-26 11:36:47.103582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.666 [2024-07-26 11:36:47.103613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.666 [2024-07-26 11:36:47.117593] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.666 [2024-07-26 11:36:47.117868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.666 [2024-07-26 11:36:47.117897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.666 [2024-07-26 11:36:47.131942] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.666 [2024-07-26 11:36:47.132252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.666 [2024-07-26 11:36:47.132282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.666 [2024-07-26 11:36:47.146148] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.666 [2024-07-26 11:36:47.146418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.666 [2024-07-26 11:36:47.146456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.666 [2024-07-26 11:36:47.160384] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.666 [2024-07-26 11:36:47.160713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.666 [2024-07-26 11:36:47.160744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.666 [2024-07-26 11:36:47.174703] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.666 [2024-07-26 11:36:47.175030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.666 [2024-07-26 11:36:47.175060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.666 [2024-07-26 11:36:47.188983] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.666 [2024-07-26 11:36:47.189288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.666 [2024-07-26 11:36:47.189318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.666 [2024-07-26 11:36:47.203262] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.666 [2024-07-26 11:36:47.203579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.667 [2024-07-26 11:36:47.203610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.667 [2024-07-26 11:36:47.217462] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.667 [2024-07-26 11:36:47.217727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.667 [2024-07-26 11:36:47.217757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.667 [2024-07-26 11:36:47.231642] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.667 [2024-07-26 11:36:47.231928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.667 [2024-07-26 11:36:47.231958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.667 [2024-07-26 11:36:47.245703] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.667 [2024-07-26 11:36:47.245981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.667 [2024-07-26 11:36:47.246011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.667 [2024-07-26 11:36:47.259753] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.667 [2024-07-26 11:36:47.260031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.667 [2024-07-26 11:36:47.260067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.667 [2024-07-26 11:36:47.273752] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.667 [2024-07-26 11:36:47.274036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.667 [2024-07-26 11:36:47.274067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.667 [2024-07-26 11:36:47.287997] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.667 [2024-07-26 11:36:47.288315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.667 [2024-07-26 11:36:47.288345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.667 [2024-07-26 
11:36:47.302419] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.667 [2024-07-26 11:36:47.302710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.667 [2024-07-26 11:36:47.302740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.667 [2024-07-26 11:36:47.316768] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.667 [2024-07-26 11:36:47.317091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.667 [2024-07-26 11:36:47.317121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.925 [2024-07-26 11:36:47.331061] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.925 [2024-07-26 11:36:47.331385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.925 [2024-07-26 11:36:47.331416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.926 [2024-07-26 11:36:47.345513] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.926 [2024-07-26 11:36:47.345836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.926 [2024-07-26 11:36:47.345867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.926 [2024-07-26 11:36:47.359937] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.926 [2024-07-26 11:36:47.360254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.926 [2024-07-26 11:36:47.360285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.926 [2024-07-26 11:36:47.374346] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.926 [2024-07-26 11:36:47.374674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.926 [2024-07-26 11:36:47.374705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.926 [2024-07-26 11:36:47.388682] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.926 [2024-07-26 11:36:47.389060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.926 [2024-07-26 11:36:47.389091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
00:28:51.926 [2024-07-26 11:36:47.403148] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.926 [2024-07-26 11:36:47.403451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.926 [2024-07-26 11:36:47.403481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.926 [2024-07-26 11:36:47.417512] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.926 [2024-07-26 11:36:47.417842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.926 [2024-07-26 11:36:47.417872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.926 [2024-07-26 11:36:47.431891] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.926 [2024-07-26 11:36:47.432222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.926 [2024-07-26 11:36:47.432253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.926 [2024-07-26 11:36:47.446357] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.926 [2024-07-26 11:36:47.446676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.926 [2024-07-26 11:36:47.446706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.926 [2024-07-26 11:36:47.460889] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.926 [2024-07-26 11:36:47.461193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.926 [2024-07-26 11:36:47.461223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.926 [2024-07-26 11:36:47.475298] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.926 [2024-07-26 11:36:47.475640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.926 [2024-07-26 11:36:47.475671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.926 [2024-07-26 11:36:47.489740] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.926 [2024-07-26 11:36:47.490053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.926 [2024-07-26 11:36:47.490083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d 
p:0 m:0 dnr:0 00:28:51.926 [2024-07-26 11:36:47.504079] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.926 [2024-07-26 11:36:47.504399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.926 [2024-07-26 11:36:47.504437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.926 [2024-07-26 11:36:47.518540] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.926 [2024-07-26 11:36:47.518856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.926 [2024-07-26 11:36:47.518886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.926 [2024-07-26 11:36:47.532877] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.926 [2024-07-26 11:36:47.533187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.926 [2024-07-26 11:36:47.533217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.926 [2024-07-26 11:36:47.547305] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.926 [2024-07-26 11:36:47.547623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.926 [2024-07-26 11:36:47.547653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.926 [2024-07-26 11:36:47.561628] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.926 [2024-07-26 11:36:47.561950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.926 [2024-07-26 11:36:47.561979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:51.926 [2024-07-26 11:36:47.576125] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:51.926 [2024-07-26 11:36:47.576646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.926 [2024-07-26 11:36:47.576678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:52.185 [2024-07-26 11:36:47.590527] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:52.185 [2024-07-26 11:36:47.590798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.185 [2024-07-26 11:36:47.590829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 
cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:52.185 [2024-07-26 11:36:47.604936] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:52.185 [2024-07-26 11:36:47.605250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.185 [2024-07-26 11:36:47.605280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:52.185 [2024-07-26 11:36:47.619199] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:52.185 [2024-07-26 11:36:47.619521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.185 [2024-07-26 11:36:47.619551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:52.185 [2024-07-26 11:36:47.633670] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:52.185 [2024-07-26 11:36:47.633985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.185 [2024-07-26 11:36:47.634015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:52.185 [2024-07-26 11:36:47.648106] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:52.185 [2024-07-26 11:36:47.648436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.185 [2024-07-26 11:36:47.648467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:52.185 [2024-07-26 11:36:47.662330] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:52.185 [2024-07-26 11:36:47.662613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.185 [2024-07-26 11:36:47.662643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:52.185 [2024-07-26 11:36:47.676714] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:52.185 [2024-07-26 11:36:47.677032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.185 [2024-07-26 11:36:47.677062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:52.185 [2024-07-26 11:36:47.691261] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:52.185 [2024-07-26 11:36:47.691582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.185 [2024-07-26 11:36:47.691613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:52.185 [2024-07-26 11:36:47.705597] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:52.185 [2024-07-26 11:36:47.705873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.185 [2024-07-26 11:36:47.705903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:52.185 [2024-07-26 11:36:47.720073] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:52.185 [2024-07-26 11:36:47.720393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.185 [2024-07-26 11:36:47.720422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:52.185 [2024-07-26 11:36:47.734328] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:52.185 [2024-07-26 11:36:47.734578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.185 [2024-07-26 11:36:47.734608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:52.185 [2024-07-26 11:36:47.748455] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:52.185 [2024-07-26 11:36:47.748702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.185 [2024-07-26 11:36:47.748731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:52.185 [2024-07-26 11:36:47.762712] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:52.185 [2024-07-26 11:36:47.763032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.185 [2024-07-26 11:36:47.763072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:52.185 [2024-07-26 11:36:47.777002] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:52.185 [2024-07-26 11:36:47.777250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.185 [2024-07-26 11:36:47.777281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:52.185 [2024-07-26 11:36:47.791262] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:52.185 [2024-07-26 11:36:47.791520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.185 [2024-07-26 11:36:47.791551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:52.185 [2024-07-26 11:36:47.805478] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:52.185 [2024-07-26 11:36:47.805725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.185 [2024-07-26 11:36:47.805756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:52.185 [2024-07-26 11:36:47.819635] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:52.185 [2024-07-26 11:36:47.819947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.185 [2024-07-26 11:36:47.819977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:52.185 [2024-07-26 11:36:47.833708] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:52.185 [2024-07-26 11:36:47.833940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.186 [2024-07-26 11:36:47.833971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:52.444 [2024-07-26 11:36:47.847840] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:52.444 [2024-07-26 11:36:47.848124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.444 [2024-07-26 11:36:47.848154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:52.444 [2024-07-26 11:36:47.862118] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:52.444 [2024-07-26 11:36:47.862451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.444 [2024-07-26 11:36:47.862482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:52.444 [2024-07-26 11:36:47.876378] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:52.444 [2024-07-26 11:36:47.876626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.444 [2024-07-26 11:36:47.876658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:52.444 [2024-07-26 11:36:47.890573] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:52.444 [2024-07-26 11:36:47.890858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.444 [2024-07-26 11:36:47.890888] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:52.444 [2024-07-26 11:36:47.904917] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:52.444 [2024-07-26 11:36:47.905238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.444 [2024-07-26 11:36:47.905269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:52.444 [2024-07-26 11:36:47.919202] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:52.444 [2024-07-26 11:36:47.919489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.444 [2024-07-26 11:36:47.919520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:52.444 [2024-07-26 11:36:47.933484] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:52.444 [2024-07-26 11:36:47.933817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.444 [2024-07-26 11:36:47.933847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:52.444 [2024-07-26 11:36:47.947848] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:52.444 [2024-07-26 11:36:47.948184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.444 [2024-07-26 11:36:47.948213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:52.444 [2024-07-26 11:36:47.962181] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:52.444 [2024-07-26 11:36:47.962502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.444 [2024-07-26 11:36:47.962532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:52.444 [2024-07-26 11:36:47.976538] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:52.444 [2024-07-26 11:36:47.976854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.444 [2024-07-26 11:36:47.976884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:52.444 [2024-07-26 11:36:47.990888] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13827e0) with pdu=0x2000190fe2e8 00:28:52.444 [2024-07-26 11:36:47.991214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.444 [2024-07-26 11:36:47.991243] nvme_qpair.c: 
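Every iteration in the elided run is one injected failure making the full round trip: the CRC32C data-digest check fails in data_crc32_calc_done(), and the WRITE it belongs to completes with COMMAND TRANSIENT TRANSPORT ERROR, the generic status (type 00, code 0x22) used for recoverable transport-level failures such as digest mismatches. A quick way to tally the failures from a saved copy of this console output (the build.log filename is only an assumption for illustration):

    # count injected data-digest failures in a saved copy of this log
    grep -oF 'Data digest error on tqpair=' build.log | wc -l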
00:28:52.444 Latency(us)
00:28:52.444 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:52.444 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:52.444 nvme0n1 : 2.01 17875.67 69.83 0.00 0.00 7143.19 4708.88 14563.56
00:28:52.444 ===================================================================================================================
00:28:52.444 Total : 17875.67 69.83 0.00 0.00 7143.19 4708.88 14563.56
00:28:52.444 0
00:28:52.444 11:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:52.444 11:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:52.444 11:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:52.444 | .driver_specific
00:28:52.444 | .nvme_error
00:28:52.444 | .status_code
00:28:52.444 | .command_transient_transport_error'
00:28:52.444 11:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:52.703 11:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 140 > 0 ))
00:28:52.703 11:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2228466
00:28:52.703 11:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2228466 ']'
00:28:52.703 11:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2228466
00:28:52.703 11:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:28:52.703 11:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:52.960 11:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2228466
00:28:52.960 11:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:28:52.960 11:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:28:52.960 11:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2228466'
00:28:52.960 killing process with pid 2228466
00:28:52.960 11:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2228466
00:28:52.960 Received shutdown signal, test time was about 2.000000 seconds
00:28:52.960
00:28:52.960 Latency(us)
00:28:52.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
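The (( 140 > 0 )) check above is the pass criterion: bdev_get_iostat reported 140 completions with the transient transport error status on nvme0n1, one per injected digest failure. Condensed into a single standalone pipeline (same socket and jq path as traced above; the errcount variable is only for illustration):

    # count transient transport errors recorded by the bdev layer
    errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errcount > 0 ))  # the digest-error test only passes if at least one error was counted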
00:28:52.960 ===================================================================================================================
00:28:52.960 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:52.960 11:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2228466
00:28:53.218 11:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:28:53.219 11:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:53.219 11:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:53.219 11:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:53.219 11:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:53.219 11:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2228881
00:28:53.219 11:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:28:53.219 11:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2228881 /var/tmp/bperf.sock
00:28:53.219 11:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2228881 ']'
00:28:53.219 11:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:53.219 11:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:28:53.219 11:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:53.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:53.219 11:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:28:53.219 11:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:53.219 [2024-07-26 11:36:48.745342] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization...
00:28:53.219 [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2228881 ]
00:28:53.219 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:53.219 Zero copy mechanism will not be used.
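The relaunch above starts the second error pass: bdevperf now drives 128 KiB random writes at queue depth 16 for 2 seconds, with -z holding the workload until the perform_tests RPC seen further down. A minimal reconstruction of that launch (run from the spdk checkout; backgrounding assumed, since waitforlisten then polls the socket):

    # second bperf instance: 128 KiB randwrite, qd 16, 2 s, RPC-gated start
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 131072 -t 2 -q 16 -z &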
00:28:53.219 EAL: No free 2048 kB hugepages reported on node 1
00:28:53.219 [2024-07-26 11:36:48.852611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:53.477 [2024-07-26 11:36:48.974867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:28:53.477 11:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:28:53.477 11:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:28:53.477 11:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:53.477 11:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:54.042 11:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:54.042 11:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:54.042 11:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:54.042 11:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:54.042 11:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:54.042 11:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:54.607 nvme0n1
00:28:54.607 11:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:28:54.607 11:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:54.607 11:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:54.608 11:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:54.608 11:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:54.608 11:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:54.608 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:54.608 Zero copy mechanism will not be used.
00:28:54.608 Running I/O for 2 seconds...
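Stripped of the xtrace prefixes, the setup for this pass is four RPCs plus the kick-off. A condensed sketch with repo-relative paths (arguments exactly as traced above; which socket rpc_cmd targets follows the harness defaults, presumably the nvmf target app rather than bperf.sock):

    # per-controller NVMe error counters on, unlimited bdev retries
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # clear any leftover crc32c corruption before attaching
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    # attach the TCP controller with data digest (--ddgst) enabled
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # re-arm crc32c corruption, then start the timed workload
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests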
00:28:54.608 [2024-07-26 11:36:50.184840] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90
00:28:54.608 [2024-07-26 11:36:50.185216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.608 [2024-07-26 11:36:50.185259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:54.608 [2024-07-26 11:36:50.195810] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90
00:28:54.608 [2024-07-26 11:36:50.196168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.608 [2024-07-26 11:36:50.196202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:54.608 [2024-07-26 11:36:50.205749] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90
00:28:54.608 [2024-07-26 11:36:50.206106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.608 [2024-07-26 11:36:50.206138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-record pattern repeats for every queued WRITE from 11:36:50.216 through 11:36:51.624 (elapsed 00:28:54.608 to 00:28:56.166): a data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90, the WRITE command print (sqid:1 cid:15 nsid:1, len:32, varying lba), and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd cycling 0001/0021/0041/0061 ...]
00:28:56.166 [2024-07-26 11:36:51.635491] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90
00:28:56.166
[2024-07-26 11:36:51.635894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.166 [2024-07-26 11:36:51.635928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.166 [2024-07-26 11:36:51.646602] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.166 [2024-07-26 11:36:51.647050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.166 [2024-07-26 11:36:51.647082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.166 [2024-07-26 11:36:51.657511] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.166 [2024-07-26 11:36:51.657901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.166 [2024-07-26 11:36:51.657940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.166 [2024-07-26 11:36:51.668290] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.166 [2024-07-26 11:36:51.668623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.166 [2024-07-26 11:36:51.668655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.166 [2024-07-26 11:36:51.679985] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.166 [2024-07-26 11:36:51.680308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.166 [2024-07-26 11:36:51.680339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.166 [2024-07-26 11:36:51.691583] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.166 [2024-07-26 11:36:51.691945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.166 [2024-07-26 11:36:51.691977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.166 [2024-07-26 11:36:51.703870] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.166 [2024-07-26 11:36:51.704314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.166 [2024-07-26 11:36:51.704346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.166 [2024-07-26 11:36:51.715112] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.166 [2024-07-26 11:36:51.715515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.166 [2024-07-26 11:36:51.715548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.166 [2024-07-26 11:36:51.727184] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.166 [2024-07-26 11:36:51.727647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.166 [2024-07-26 11:36:51.727679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.166 [2024-07-26 11:36:51.740026] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.166 [2024-07-26 11:36:51.740541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.166 [2024-07-26 11:36:51.740574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.166 [2024-07-26 11:36:51.751730] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.166 [2024-07-26 11:36:51.752189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.166 [2024-07-26 11:36:51.752220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.166 [2024-07-26 11:36:51.763548] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.166 [2024-07-26 11:36:51.763976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.166 [2024-07-26 11:36:51.764008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.166 [2024-07-26 11:36:51.775797] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.166 [2024-07-26 11:36:51.776290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.166 [2024-07-26 11:36:51.776322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.166 [2024-07-26 11:36:51.787454] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.166 [2024-07-26 11:36:51.787775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.166 [2024-07-26 11:36:51.787806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.166 [2024-07-26 11:36:51.799163] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.167 [2024-07-26 11:36:51.799576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.167 [2024-07-26 11:36:51.799609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.167 [2024-07-26 11:36:51.811821] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.167 [2024-07-26 11:36:51.812274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.167 [2024-07-26 11:36:51.812317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.167 [2024-07-26 11:36:51.822011] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.167 [2024-07-26 11:36:51.822357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.167 [2024-07-26 11:36:51.822389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.425 [2024-07-26 11:36:51.831696] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.425 [2024-07-26 11:36:51.832067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.425 [2024-07-26 11:36:51.832099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.426 [2024-07-26 11:36:51.841735] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.426 [2024-07-26 11:36:51.842220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.426 [2024-07-26 11:36:51.842253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.426 [2024-07-26 11:36:51.852611] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.426 [2024-07-26 11:36:51.853126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.426 [2024-07-26 11:36:51.853159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.426 [2024-07-26 11:36:51.863074] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.426 [2024-07-26 11:36:51.863407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.426 [2024-07-26 11:36:51.863445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:28:56.426 [2024-07-26 11:36:51.872841] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.426 [2024-07-26 11:36:51.873199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.426 [2024-07-26 11:36:51.873230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.426 [2024-07-26 11:36:51.882523] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.426 [2024-07-26 11:36:51.882926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.426 [2024-07-26 11:36:51.882958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.426 [2024-07-26 11:36:51.893093] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.426 [2024-07-26 11:36:51.893426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.426 [2024-07-26 11:36:51.893465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.426 [2024-07-26 11:36:51.902905] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.426 [2024-07-26 11:36:51.903269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.426 [2024-07-26 11:36:51.903300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.426 [2024-07-26 11:36:51.914833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.426 [2024-07-26 11:36:51.915206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.426 [2024-07-26 11:36:51.915238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.426 [2024-07-26 11:36:51.926478] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.426 [2024-07-26 11:36:51.927077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.426 [2024-07-26 11:36:51.927110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.426 [2024-07-26 11:36:51.937461] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.426 [2024-07-26 11:36:51.937945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.426 [2024-07-26 11:36:51.937976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.426 [2024-07-26 11:36:51.949096] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.426 [2024-07-26 11:36:51.949473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.426 [2024-07-26 11:36:51.949506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.426 [2024-07-26 11:36:51.961220] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.426 [2024-07-26 11:36:51.961676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.426 [2024-07-26 11:36:51.961709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.426 [2024-07-26 11:36:51.972139] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.426 [2024-07-26 11:36:51.972568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.426 [2024-07-26 11:36:51.972601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.426 [2024-07-26 11:36:51.980907] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.426 [2024-07-26 11:36:51.981251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.426 [2024-07-26 11:36:51.981283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.426 [2024-07-26 11:36:51.988776] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.426 [2024-07-26 11:36:51.989094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.426 [2024-07-26 11:36:51.989135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.426 [2024-07-26 11:36:51.997358] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.426 [2024-07-26 11:36:51.997679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.426 [2024-07-26 11:36:51.997712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.426 [2024-07-26 11:36:52.005762] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.426 [2024-07-26 11:36:52.006082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.426 [2024-07-26 11:36:52.006115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.426 [2024-07-26 11:36:52.015113] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.426 [2024-07-26 11:36:52.015521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.426 [2024-07-26 11:36:52.015554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.426 [2024-07-26 11:36:52.024985] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.426 [2024-07-26 11:36:52.025308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.426 [2024-07-26 11:36:52.025343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.426 [2024-07-26 11:36:52.034769] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.426 [2024-07-26 11:36:52.035101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.426 [2024-07-26 11:36:52.035140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.426 [2024-07-26 11:36:52.046729] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.426 [2024-07-26 11:36:52.047096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.426 [2024-07-26 11:36:52.047128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.426 [2024-07-26 11:36:52.056326] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.426 [2024-07-26 11:36:52.056648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.426 [2024-07-26 11:36:52.056681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.426 [2024-07-26 11:36:52.065086] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.426 [2024-07-26 11:36:52.065406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.426 [2024-07-26 11:36:52.065445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.426 [2024-07-26 11:36:52.073637] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.426 [2024-07-26 11:36:52.073970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.426 [2024-07-26 11:36:52.074002] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.426 [2024-07-26 11:36:52.081355] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.426 [2024-07-26 11:36:52.081677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.426 [2024-07-26 11:36:52.081709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.685 [2024-07-26 11:36:52.090561] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.685 [2024-07-26 11:36:52.090985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.685 [2024-07-26 11:36:52.091015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.685 [2024-07-26 11:36:52.100868] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.685 [2024-07-26 11:36:52.101189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.685 [2024-07-26 11:36:52.101221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.685 [2024-07-26 11:36:52.110672] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.685 [2024-07-26 11:36:52.110997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.685 [2024-07-26 11:36:52.111029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.685 [2024-07-26 11:36:52.122370] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.685 [2024-07-26 11:36:52.122721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.685 [2024-07-26 11:36:52.122752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.685 [2024-07-26 11:36:52.132775] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.685 [2024-07-26 11:36:52.133098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.685 [2024-07-26 11:36:52.133130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.685 [2024-07-26 11:36:52.143743] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90 00:28:56.685 [2024-07-26 11:36:52.144191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.685 
[2024-07-26 11:36:52.144223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:56.685 [2024-07-26 11:36:52.153981] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90
00:28:56.685 [2024-07-26 11:36:52.154305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:56.685 [2024-07-26 11:36:52.154336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:56.685 [2024-07-26 11:36:52.162967] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90
00:28:56.685 [2024-07-26 11:36:52.163287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:56.685 [2024-07-26 11:36:52.163319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:56.685 [2024-07-26 11:36:52.172593] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1382b20) with pdu=0x2000190fef90
00:28:56.685 [2024-07-26 11:36:52.172974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:56.685 [2024-07-26 11:36:52.173011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:56.685
00:28:56.685 Latency(us)
00:28:56.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:56.685 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:28:56.685 nvme0n1 : 2.01 2841.65 355.21 0.00 0.00 5616.51 2390.85 13495.56
00:28:56.685 ===================================================================================================================
00:28:56.685 Total : 2841.65 355.21 0.00 0.00 5616.51 2390.85 13495.56
00:28:56.685 0
00:28:56.685 11:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:56.685 11:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:56.685 11:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:56.685 | .driver_specific
00:28:56.685 | .nvme_error
00:28:56.685 | .status_code
00:28:56.685 | .command_transient_transport_error'
00:28:56.685 11:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:57.251 11:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 183 > 0 ))
00:28:57.251 11:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2228881
00:28:57.251 11:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2228881 ']'
00:28:57.251 11:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2228881
00:28:57.251 11:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:28:57.251
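
The digest.sh trace above is the whole verification step: it fetches per-bdev iostat from the bperf RPC server, pulls the transient-transport-error counter out of the JSON with jq, and asserts it is non-zero (183 here, one per corrupted WRITE). Below is a minimal standalone sketch of that same query, assuming a bdevperf instance is already serving RPC on /var/tmp/bperf.sock and exposing a bdev named nvme0n1, exactly as in this run:

  #!/usr/bin/env bash
  # Hedged sketch: count completions that failed with COMMAND TRANSIENT
  # TRANSPORT ERROR, mirroring get_transient_errcount in host/digest.sh.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bperf.sock
  BDEV=nvme0n1

  errcount=$("$SPDK_DIR/scripts/rpc.py" -s "$SOCK" bdev_get_iostat -b "$BDEV" |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

  # Digest-error injection should have produced at least one such completion.
  (( errcount > 0 )) && echo "observed $errcount transient transport errors"
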
11:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:57.251 11:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2228881 00:28:57.251 11:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:57.251 11:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:57.251 11:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2228881' 00:28:57.251 killing process with pid 2228881 00:28:57.251 11:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2228881 00:28:57.251 Received shutdown signal, test time was about 2.000000 seconds 00:28:57.251 00:28:57.251 Latency(us) 00:28:57.251 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:57.251 =================================================================================================================== 00:28:57.251 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:57.251 11:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2228881 00:28:57.509 11:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2227386 00:28:57.509 11:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2227386 ']' 00:28:57.509 11:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2227386 00:28:57.509 11:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:57.509 11:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:57.509 11:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2227386 00:28:57.509 11:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:57.509 11:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:57.509 11:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2227386' 00:28:57.509 killing process with pid 2227386 00:28:57.509 11:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2227386 00:28:57.509 11:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2227386 00:28:57.768 00:28:57.768 real 0m17.318s 00:28:57.768 user 0m35.402s 00:28:57.768 sys 0m4.990s 00:28:57.768 11:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:57.768 11:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:57.768 ************************************ 00:28:57.768 END TEST nvmf_digest_error 00:28:57.768 ************************************ 00:28:57.768 11:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:57.768 11:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:57.768 11:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:28:57.768 11:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:28:57.768 11:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:57.768 11:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:28:57.768 11:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:57.768 11:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:57.768 rmmod nvme_tcp 00:28:57.768 rmmod nvme_fabrics 00:28:57.768 rmmod nvme_keyring 00:28:58.027 11:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:58.027 11:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:28:58.027 11:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:28:58.027 11:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2227386 ']' 00:28:58.027 11:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2227386 00:28:58.027 11:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 2227386 ']' 00:28:58.027 11:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 2227386 00:28:58.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2227386) - No such process 00:28:58.027 11:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 2227386 is not found' 00:28:58.027 Process with pid 2227386 is not found 00:28:58.027 11:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:58.027 11:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:58.027 11:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:58.027 11:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:58.027 11:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:58.027 11:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.027 11:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:58.027 11:36:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.931 11:36:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:59.931 00:28:59.931 real 0m40.693s 00:28:59.931 user 1m14.044s 00:28:59.931 sys 0m12.115s 00:28:59.931 11:36:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:59.931 11:36:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:59.931 ************************************ 00:28:59.931 END TEST nvmf_digest 00:28:59.931 ************************************ 00:28:59.931 11:36:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:28:59.931 11:36:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:28:59.931 11:36:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:28:59.931 11:36:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:59.931 11:36:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # 
'[' 3 -le 1 ']' 00:28:59.931 11:36:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:59.931 11:36:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.931 ************************************ 00:28:59.931 START TEST nvmf_bdevperf 00:28:59.931 ************************************ 00:28:59.932 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:00.191 * Looking for test storage... 00:29:00.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:00.191 11:36:55 
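
The build_nvmf_app_args trace just above shows how the harness assembles the target's command line: flags are pushed into a bash array so later steps (here, the "ip netns exec" wrapper added by nvmf_tcp_init) can be prepended without breaking quoting. A rough sketch of the idiom, with the binary path assumed rather than taken from this log:

  #!/usr/bin/env bash
  # Hedged sketch of the NVMF_APP array idiom seen in nvmf/common.sh.
  NVMF_APP_SHM_ID=0
  NVMF_APP=(./build/bin/nvmf_tgt)                     # assumed binary path
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)         # as traced: shm id + trace flags
  NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)  # wrapper added later in this log
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
  echo "would run: ${NVMF_APP[*]}"                    # array expands with quoting preserved
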
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:29:00.191 11:36:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:02.765 11:36:57 
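
bdevperf.sh pins MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 in the trace above; those values feed the RAM-backed bdev the test later exports over NVMe/TCP. A hedged sketch of how such a bdev is typically created over SPDK RPC once the target is up (the bdev name Malloc0 is an assumption, not taken from this log):

  #!/usr/bin/env bash
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  MALLOC_BDEV_SIZE=64     # MiB, per host/bdevperf.sh@11
  MALLOC_BLOCK_SIZE=512   # bytes, per host/bdevperf.sh@12

  # Create a RAM-backed bdev on the running target; "Malloc0" is hypothetical.
  "$SPDK_DIR/scripts/rpc.py" bdev_malloc_create -b Malloc0 \
      "$MALLOC_BDEV_SIZE" "$MALLOC_BLOCK_SIZE"
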
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:29:02.765 Found 0000:84:00.0 (0x8086 - 0x159b) 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:29:02.765 Found 0000:84:00.1 (0x8086 - 0x159b) 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:02.765 11:36:57 
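
The two "Found 0000:84:00.0/0000:84:00.1 (0x8086 - 0x159b)" hits above are E810 ("ice") functions matched purely by vendor:device ID. The same probe can be reproduced with stock tools as a quick sanity check on any node; a minimal sketch:

  #!/usr/bin/env bash
  # List Intel E810 functions (vendor 0x8086, device 0x159b) with full
  # domain:bus:device.function addresses and the bound kernel driver.
  lspci -D -d 8086:159b -k
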
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:29:02.765 Found net devices under 0000:84:00.0: cvl_0_0 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:29:02.765 Found net devices under 0000:84:00.1: cvl_0_1 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:02.765 11:36:57 
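
The "Found net devices under ...: cvl_0_0 / cvl_0_1" lines come from globbing each matched function's sysfs net directory, exactly as the pci_net_devs assignment in the trace does. A standalone reproduction of that lookup, using this run's PCI addresses:

  #!/usr/bin/env bash
  # Resolve net device names from PCI addresses: the interface name is just
  # the directory under /sys/bus/pci/devices/<bdf>/net/.
  for pci in 0000:84:00.0 0000:84:00.1; do
      for path in "/sys/bus/pci/devices/$pci/net/"*; do
          [ -e "$path" ] && echo "Found net devices under $pci: ${path##*/}"
      done
  done
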
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:02.765 11:36:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:02.765 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:02.765 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:02.765 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:02.765 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:02.765 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:02.765 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:02.765 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:02.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:02.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:29:02.765 00:29:02.765 --- 10.0.0.2 ping statistics --- 00:29:02.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:02.765 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:29:02.765 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:02.765 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:02.765 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:29:02.765 00:29:02.765 --- 10.0.0.1 ping statistics --- 00:29:02.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:02.765 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:29:02.765 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:02.765 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:29:02.765 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:02.765 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:02.765 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:02.765 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:02.765 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:02.765 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:02.765 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:02.765 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:02.765 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:02.765 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:02.765 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:02.765 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:02.765 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2231383 00:29:02.765 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:02.765 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2231383 00:29:02.765 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 2231383 ']' 00:29:02.765 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:02.765 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:02.765 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:02.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:02.765 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:02.765 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:02.765 [2024-07-26 11:36:58.187142] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
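The nvmf_tcp_init sequence above builds the point-to-point topology for the whole test: the target port is moved into its own network namespace so that target (10.0.0.2) and initiator (10.0.0.1) traffic genuinely crosses the link between the two E810 ports, port 4420 is opened for NVMe/TCP, and one ping in each direction proves the path before the target starts. Consolidated from the trace (the same commands, gathered for reading):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP back in
ping -c 1 10.0.0.2                                             # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespace -> initiator

Every target-side command from here on is wrapped in ip netns exec cvl_0_0_ns_spdk (the NVMF_TARGET_NS_CMD array), which is why the nvmf_tgt launch above carries that prefix.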
00:29:02.765 [2024-07-26 11:36:58.187251] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:02.765 EAL: No free 2048 kB hugepages reported on node 1 00:29:02.765 [2024-07-26 11:36:58.277359] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:02.765 [2024-07-26 11:36:58.417418] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:02.765 [2024-07-26 11:36:58.417503] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:02.765 [2024-07-26 11:36:58.417521] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:02.765 [2024-07-26 11:36:58.417535] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:02.765 [2024-07-26 11:36:58.417547] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:02.765 [2024-07-26 11:36:58.417610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:02.765 [2024-07-26 11:36:58.417666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:02.765 [2024-07-26 11:36:58.417669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:03.025 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:03.025 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:29:03.025 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:03.025 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:03.025 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:03.025 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:03.025 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:03.025 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.025 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:03.025 [2024-07-26 11:36:58.596580] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:03.025 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.025 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:03.025 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.025 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:03.025 Malloc0 00:29:03.025 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.025 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:03.025 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.025 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:03.025 11:36:58 
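A detail worth decoding from the target startup above: nvmfappstart passes -m 0xE, so nvmf_tgt reports 'Total cores available: 3' and starts reactors on cores 1, 2 and 3, while each bdevperf instance below runs with -c 0x1 on core 0, so target and initiator never share a CPU. The mask-to-core expansion, as a quick check:

mask=0xE    # binary 1110 -> cores 1,2,3 (nvmf_tgt); 0x1 -> binary 0001, core 0 (bdevperf)
for cpu in {0..3}; do
  (( (mask >> cpu) & 1 )) && echo "reactor expected on core $cpu"
done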
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.025 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:03.025 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.025 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:03.025 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.025 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:03.025 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.025 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:03.025 [2024-07-26 11:36:58.670261] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:03.025 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.025 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:03.025 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:03.025 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:03.025 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:03.025 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:03.025 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:03.025 { 00:29:03.025 "params": { 00:29:03.025 "name": "Nvme$subsystem", 00:29:03.025 "trtype": "$TEST_TRANSPORT", 00:29:03.025 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:03.025 "adrfam": "ipv4", 00:29:03.025 "trsvcid": "$NVMF_PORT", 00:29:03.025 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:03.025 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:03.025 "hdgst": ${hdgst:-false}, 00:29:03.025 "ddgst": ${ddgst:-false} 00:29:03.025 }, 00:29:03.025 "method": "bdev_nvme_attach_controller" 00:29:03.025 } 00:29:03.025 EOF 00:29:03.025 )") 00:29:03.025 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:03.025 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:03.025 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:03.025 11:36:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:03.025 "params": { 00:29:03.025 "name": "Nvme1", 00:29:03.025 "trtype": "tcp", 00:29:03.025 "traddr": "10.0.0.2", 00:29:03.025 "adrfam": "ipv4", 00:29:03.025 "trsvcid": "4420", 00:29:03.025 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:03.025 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:03.025 "hdgst": false, 00:29:03.025 "ddgst": false 00:29:03.025 }, 00:29:03.025 "method": "bdev_nvme_attach_controller" 00:29:03.025 }' 00:29:03.283 [2024-07-26 11:36:58.723808] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
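At this point the target is fully provisioned: a TCP transport (the -o and -u 8192 options taken verbatim from the trace), a 64 MB malloc bdev with 512-byte blocks, a subsystem that allows any host (-a) with serial SPDK00000000000001, its namespace, and a listener on 10.0.0.2:4420. rpc_cmd in the trace is the harness's wrapper around SPDK's JSON-RPC client; issued directly, the same five calls would look like this (default /var/tmp/spdk.sock socket assumed):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MB bdev, 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420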
00:29:03.283 [2024-07-26 11:36:58.723889] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2231522 ] 00:29:03.283 EAL: No free 2048 kB hugepages reported on node 1 00:29:03.283 [2024-07-26 11:36:58.794411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:03.283 [2024-07-26 11:36:58.918701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:03.541 Running I/O for 1 seconds... 00:29:04.913 00:29:04.913 Latency(us) 00:29:04.913 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.913 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:04.913 Verification LBA range: start 0x0 length 0x4000 00:29:04.913 Nvme1n1 : 1.01 7918.79 30.93 0.00 0.00 16091.02 3519.53 15825.73 00:29:04.913 =================================================================================================================== 00:29:04.913 Total : 7918.79 30.93 0.00 0.00 16091.02 3519.53 15825.73 00:29:04.913 11:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2231669 00:29:04.913 11:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:04.913 11:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:04.913 11:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:04.913 11:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:04.913 11:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:04.913 11:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:04.913 11:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:04.913 { 00:29:04.913 "params": { 00:29:04.913 "name": "Nvme$subsystem", 00:29:04.913 "trtype": "$TEST_TRANSPORT", 00:29:04.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.913 "adrfam": "ipv4", 00:29:04.913 "trsvcid": "$NVMF_PORT", 00:29:04.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.913 "hdgst": ${hdgst:-false}, 00:29:04.913 "ddgst": ${ddgst:-false} 00:29:04.913 }, 00:29:04.913 "method": "bdev_nvme_attach_controller" 00:29:04.913 } 00:29:04.913 EOF 00:29:04.913 )") 00:29:04.913 11:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:04.913 11:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:04.913 11:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:04.913 11:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:04.913 "params": { 00:29:04.913 "name": "Nvme1", 00:29:04.913 "trtype": "tcp", 00:29:04.913 "traddr": "10.0.0.2", 00:29:04.913 "adrfam": "ipv4", 00:29:04.913 "trsvcid": "4420", 00:29:04.913 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:04.913 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:04.913 "hdgst": false, 00:29:04.913 "ddgst": false 00:29:04.913 }, 00:29:04.913 "method": "bdev_nvme_attach_controller" 00:29:04.913 }' 00:29:04.913 [2024-07-26 11:37:00.523556] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
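Two things are easy to miss in the bdevperf output above. First, the 1-second verify pass is internally consistent: 7918.79 IOPS at 4096-byte I/O is 7918.79 * 4096 / 2^20 = 30.93 MiB/s, exactly the MiB/s column of the results table. Second, --json /dev/fd/62 (and /dev/fd/63 for the 15-second run) means gen_nvmf_target_json's output is handed to bdevperf through a file descriptor rather than a temp file; the call shape is presumably process substitution, roughly (paths as in this workspace):

# assumed wiring behind the /dev/fd/NN argument seen above
./build/examples/bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f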
00:29:04.913 [2024-07-26 11:37:00.523654] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2231669 ] 00:29:05.171 EAL: No free 2048 kB hugepages reported on node 1 00:29:05.171 [2024-07-26 11:37:00.616678] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.171 [2024-07-26 11:37:00.739105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:05.738 Running I/O for 15 seconds... 00:29:08.270 11:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2231383 00:29:08.270 11:37:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:08.270 [2024-07-26 11:37:03.460215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.270 [2024-07-26 11:37:03.460270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[126 near-identical nvme_qpair command/completion pairs condensed here: between 11:37:03.460 and 11:37:03.465 every remaining I/O still queued on qid:1 -- WRITEs covering lba 15048-15872 and READs covering lba 14856-15024, 128 commands in flight in total, matching the -q 128 queue depth -- was printed and completed with the same 'ABORTED - SQ DELETION (00/08)' status after the target was killed.]
00:29:08.273 [2024-07-26 11:37:03.464725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ab8a0 is same with the state(5) to be set 00:29:08.273 [2024-07-26 11:37:03.464743] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:08.273 [2024-07-26 11:37:03.464756] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:08.273 [2024-07-26 11:37:03.464769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15032 len:8 PRP1 0x0 PRP2 0x0 00:29:08.273 [2024-07-26 11:37:03.464783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.273 [2024-07-26 11:37:03.464853] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22ab8a0
was disconnected and freed. reset controller. 00:29:08.273 [2024-07-26 11:37:03.464932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.273 [2024-07-26 11:37:03.464956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.273 [2024-07-26 11:37:03.464972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.273 [2024-07-26 11:37:03.464988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.273 [2024-07-26 11:37:03.465011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.273 [2024-07-26 11:37:03.465027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.273 [2024-07-26 11:37:03.465043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.273 [2024-07-26 11:37:03.465058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.273 [2024-07-26 11:37:03.465072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:08.273 [2024-07-26 11:37:03.468819] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.273 [2024-07-26 11:37:03.468875] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:08.273 [2024-07-26 11:37:03.469667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-07-26 11:37:03.469702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:08.273 [2024-07-26 11:37:03.469721] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:08.273 [2024-07-26 11:37:03.469962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:08.273 [2024-07-26 11:37:03.470206] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.273 [2024-07-26 11:37:03.470229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.273 [2024-07-26 11:37:03.470248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.273 [2024-07-26 11:37:03.473851] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
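[editor's note] The "(00/08)" printed with each aborted completion above is the NVMe status pair SCT/SC: status code type 0x0 (generic command status) with status code 0x08, defined by the NVMe base spec as "Command Aborted due to SQ Deletion". The queued READs and WRITEs were aborted because their submission queue went away when the qpair was torn down. A minimal decoding sketch in plain C, following the spec's completion DW3 layout; this is illustrative only and not SPDK code, and the helper name is made up:

#include <stdint.h>
#include <stdio.h>

/* Decode the upper 16 bits of completion dword 3:
 * P (bit 0), SC (bits 8:1), SCT (bits 11:9), M (bit 14), DNR (bit 15). */
static void decode_nvme_status(uint16_t status)
{
    unsigned p   = status & 0x1;
    unsigned sc  = (status >> 1) & 0xff;
    unsigned sct = (status >> 9) & 0x7;
    unsigned m   = (status >> 14) & 0x1;
    unsigned dnr = (status >> 15) & 0x1;

    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
    if (sct == 0x0 && sc == 0x08) {
        printf("ABORTED - SQ DELETION\n");
    }
}

int main(void)
{
    /* The (00/08) case seen throughout the log above. */
    decode_nvme_status((0x0u << 9) | (0x08u << 1));
    return 0;
}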
00:29:08.273 [2024-07-26 11:37:03.482959] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.273 [2024-07-26 11:37:03.483485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-07-26 11:37:03.483518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:08.273 [2024-07-26 11:37:03.483537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:08.274 [2024-07-26 11:37:03.483778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:08.274 [2024-07-26 11:37:03.484023] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.274 [2024-07-26 11:37:03.484048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.274 [2024-07-26 11:37:03.484064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.274 [2024-07-26 11:37:03.487662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.274 [2024-07-26 11:37:03.496959] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.274 [2024-07-26 11:37:03.497577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-07-26 11:37:03.497638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:08.274 [2024-07-26 11:37:03.497659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:08.274 [2024-07-26 11:37:03.497905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:08.274 [2024-07-26 11:37:03.498155] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.274 [2024-07-26 11:37:03.498182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.274 [2024-07-26 11:37:03.498198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.274 [2024-07-26 11:37:03.501814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
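[editor's note] On Linux, errno 111 is ECONNREFUSED: the host at 10.0.0.2 is reachable but nothing is accepting TCP connections on port 4420 (the NVMe/TCP well-known port) while the target is down, so every reconnect attempt in this loop fails identically. A standalone probe that reproduces the same errno with ordinary sockets (this is not SPDK's posix_sock_create, just an illustration):

#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };

    if (fd < 0) {
        return 1;
    }
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With a reachable host and no listener on the port, this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}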
00:29:08.274 [2024-07-26 11:37:03.510903] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.274 [2024-07-26 11:37:03.511519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-07-26 11:37:03.511565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:08.274 [2024-07-26 11:37:03.511586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:08.274 [2024-07-26 11:37:03.511832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:08.274 [2024-07-26 11:37:03.512076] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.274 [2024-07-26 11:37:03.512102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.274 [2024-07-26 11:37:03.512119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.274 [2024-07-26 11:37:03.515716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.274 [2024-07-26 11:37:03.524823] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.274 [2024-07-26 11:37:03.525280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-07-26 11:37:03.525314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:08.274 [2024-07-26 11:37:03.525333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:08.274 [2024-07-26 11:37:03.525585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:08.274 [2024-07-26 11:37:03.525830] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.274 [2024-07-26 11:37:03.525855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.274 [2024-07-26 11:37:03.525871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.274 [2024-07-26 11:37:03.529459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
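[editor's note] The "(9): Bad file descriptor" on each failed flush is errno 9, EBADF: once connect() has failed, the qpair has no usable socket left, so the subsequent attempt to flush it reports an invalid descriptor. The errno value itself is easy to reproduce in isolation (an illustrative snippet, unrelated to SPDK internals):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fds[2];

    if (pipe(fds) != 0) {
        return 1;
    }
    close(fds[1]); /* invalidate the descriptor, then use it anyway */
    if (write(fds[1], "x", 1) < 0) {
        /* Prints: write failed, errno = 9 (Bad file descriptor) */
        printf("write failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fds[0]);
    return 0;
}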
00:29:08.274 [2024-07-26 11:37:03.538760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.274 [2024-07-26 11:37:03.539302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-07-26 11:37:03.539336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:08.274 [2024-07-26 11:37:03.539355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:08.274 [2024-07-26 11:37:03.539610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:08.274 [2024-07-26 11:37:03.539853] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.274 [2024-07-26 11:37:03.539879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.274 [2024-07-26 11:37:03.539895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.274 [2024-07-26 11:37:03.543497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.274 [2024-07-26 11:37:03.552796] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.274 [2024-07-26 11:37:03.553403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-07-26 11:37:03.553461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:08.274 [2024-07-26 11:37:03.553483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:08.274 [2024-07-26 11:37:03.553730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:08.274 [2024-07-26 11:37:03.553975] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.274 [2024-07-26 11:37:03.554002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.274 [2024-07-26 11:37:03.554018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.274 [2024-07-26 11:37:03.557614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.274 [2024-07-26 11:37:03.566704] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.274 [2024-07-26 11:37:03.567240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-07-26 11:37:03.567273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:08.274 [2024-07-26 11:37:03.567292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:08.274 [2024-07-26 11:37:03.567546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:08.274 [2024-07-26 11:37:03.567792] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.274 [2024-07-26 11:37:03.567817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.274 [2024-07-26 11:37:03.567833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.274 [2024-07-26 11:37:03.571424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.274 [2024-07-26 11:37:03.580727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.274 [2024-07-26 11:37:03.581351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-07-26 11:37:03.581398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:08.274 [2024-07-26 11:37:03.581419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:08.274 [2024-07-26 11:37:03.581682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:08.274 [2024-07-26 11:37:03.581927] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.274 [2024-07-26 11:37:03.581952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.274 [2024-07-26 11:37:03.581969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.274 [2024-07-26 11:37:03.585562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.274 [2024-07-26 11:37:03.594700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.274 [2024-07-26 11:37:03.595314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-07-26 11:37:03.595359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:08.274 [2024-07-26 11:37:03.595386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:08.274 [2024-07-26 11:37:03.595661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:08.274 [2024-07-26 11:37:03.595907] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.274 [2024-07-26 11:37:03.595933] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.274 [2024-07-26 11:37:03.595950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.275 [2024-07-26 11:37:03.599541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.275 [2024-07-26 11:37:03.608649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.275 [2024-07-26 11:37:03.609151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-07-26 11:37:03.609185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:08.275 [2024-07-26 11:37:03.609204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:08.275 [2024-07-26 11:37:03.609460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:08.275 [2024-07-26 11:37:03.609707] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.275 [2024-07-26 11:37:03.609733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.275 [2024-07-26 11:37:03.609749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.275 [2024-07-26 11:37:03.613331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.275 [2024-07-26 11:37:03.622644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.275 [2024-07-26 11:37:03.623145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-07-26 11:37:03.623178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:08.275 [2024-07-26 11:37:03.623196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:08.275 [2024-07-26 11:37:03.623449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:08.275 [2024-07-26 11:37:03.623694] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.275 [2024-07-26 11:37:03.623720] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.275 [2024-07-26 11:37:03.623736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.275 [2024-07-26 11:37:03.627319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.275 [2024-07-26 11:37:03.636624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.275 [2024-07-26 11:37:03.637112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-07-26 11:37:03.637145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:08.275 [2024-07-26 11:37:03.637164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:08.275 [2024-07-26 11:37:03.637403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:08.275 [2024-07-26 11:37:03.637661] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.275 [2024-07-26 11:37:03.637693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.275 [2024-07-26 11:37:03.637710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.275 [2024-07-26 11:37:03.641288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.275 [2024-07-26 11:37:03.650594] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.275 [2024-07-26 11:37:03.651085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-07-26 11:37:03.651117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:08.275 [2024-07-26 11:37:03.651135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:08.275 [2024-07-26 11:37:03.651374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:08.275 [2024-07-26 11:37:03.651666] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.275 [2024-07-26 11:37:03.651693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.275 [2024-07-26 11:37:03.651709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.275 [2024-07-26 11:37:03.655286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.275 [2024-07-26 11:37:03.664590] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.275 [2024-07-26 11:37:03.665208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-07-26 11:37:03.665255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:08.275 [2024-07-26 11:37:03.665276] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:08.275 [2024-07-26 11:37:03.665539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:08.275 [2024-07-26 11:37:03.665784] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.275 [2024-07-26 11:37:03.665810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.275 [2024-07-26 11:37:03.665827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.275 [2024-07-26 11:37:03.669410] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.275 [2024-07-26 11:37:03.678519] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.275 [2024-07-26 11:37:03.679021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-07-26 11:37:03.679054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:08.275 [2024-07-26 11:37:03.679073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:08.275 [2024-07-26 11:37:03.679314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:08.275 [2024-07-26 11:37:03.679572] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.275 [2024-07-26 11:37:03.679597] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.275 [2024-07-26 11:37:03.679614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.275 [2024-07-26 11:37:03.683195] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.275 [2024-07-26 11:37:03.692518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.275 [2024-07-26 11:37:03.693136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-07-26 11:37:03.693181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:08.275 [2024-07-26 11:37:03.693203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:08.275 [2024-07-26 11:37:03.693465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:08.275 [2024-07-26 11:37:03.693710] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.275 [2024-07-26 11:37:03.693736] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.275 [2024-07-26 11:37:03.693753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.275 [2024-07-26 11:37:03.697335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.275 [2024-07-26 11:37:03.706451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.275 [2024-07-26 11:37:03.707080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-07-26 11:37:03.707125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:08.275 [2024-07-26 11:37:03.707147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:08.275 [2024-07-26 11:37:03.707393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:08.275 [2024-07-26 11:37:03.707653] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.275 [2024-07-26 11:37:03.707679] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.275 [2024-07-26 11:37:03.707695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.275 [2024-07-26 11:37:03.711287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.275 [2024-07-26 11:37:03.720422] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.275 [2024-07-26 11:37:03.721041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-07-26 11:37:03.721108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:08.275 [2024-07-26 11:37:03.721129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:08.275 [2024-07-26 11:37:03.721376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:08.275 [2024-07-26 11:37:03.721632] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.275 [2024-07-26 11:37:03.721659] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.275 [2024-07-26 11:37:03.721675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.275 [2024-07-26 11:37:03.725253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.275 [2024-07-26 11:37:03.734349] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.275 [2024-07-26 11:37:03.734981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-07-26 11:37:03.735028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:08.275 [2024-07-26 11:37:03.735049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:08.276 [2024-07-26 11:37:03.735302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:08.276 [2024-07-26 11:37:03.735565] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.276 [2024-07-26 11:37:03.735592] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.276 [2024-07-26 11:37:03.735609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.276 [2024-07-26 11:37:03.739194] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.276 [2024-07-26 11:37:03.748296] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.276 [2024-07-26 11:37:03.748806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-07-26 11:37:03.748841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:08.276 [2024-07-26 11:37:03.748860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:08.276 [2024-07-26 11:37:03.749100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:08.276 [2024-07-26 11:37:03.749343] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.276 [2024-07-26 11:37:03.749369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.276 [2024-07-26 11:37:03.749385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.276 [2024-07-26 11:37:03.752984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.276 [2024-07-26 11:37:03.762298] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.276 [2024-07-26 11:37:03.762792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-07-26 11:37:03.762826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:08.276 [2024-07-26 11:37:03.762845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:08.276 [2024-07-26 11:37:03.763085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:08.276 [2024-07-26 11:37:03.763329] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.276 [2024-07-26 11:37:03.763353] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.276 [2024-07-26 11:37:03.763369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.276 [2024-07-26 11:37:03.766960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.276 [2024-07-26 11:37:03.776283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.276 [2024-07-26 11:37:03.776737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-07-26 11:37:03.776769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:08.276 [2024-07-26 11:37:03.776789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:08.276 [2024-07-26 11:37:03.777029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:08.276 [2024-07-26 11:37:03.777273] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.276 [2024-07-26 11:37:03.777297] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.276 [2024-07-26 11:37:03.777319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.276 [2024-07-26 11:37:03.780912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.276 [2024-07-26 11:37:03.790215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.276 [2024-07-26 11:37:03.790683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-07-26 11:37:03.790715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:08.276 [2024-07-26 11:37:03.790733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:08.276 [2024-07-26 11:37:03.790973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:08.276 [2024-07-26 11:37:03.791216] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.276 [2024-07-26 11:37:03.791241] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.276 [2024-07-26 11:37:03.791257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.276 [2024-07-26 11:37:03.794847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.276 [2024-07-26 11:37:03.804163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.276 [2024-07-26 11:37:03.804628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-07-26 11:37:03.804660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:08.276 [2024-07-26 11:37:03.804679] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:08.276 [2024-07-26 11:37:03.804919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:08.276 [2024-07-26 11:37:03.805163] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.276 [2024-07-26 11:37:03.805188] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.276 [2024-07-26 11:37:03.805205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.276 [2024-07-26 11:37:03.808790] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.276 [2024-07-26 11:37:03.818088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.276 [2024-07-26 11:37:03.818574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-07-26 11:37:03.818606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:08.276 [2024-07-26 11:37:03.818624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:08.276 [2024-07-26 11:37:03.818863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:08.276 [2024-07-26 11:37:03.819108] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.276 [2024-07-26 11:37:03.819133] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.276 [2024-07-26 11:37:03.819149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.276 [2024-07-26 11:37:03.822737] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.276 [2024-07-26 11:37:03.832046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.276 [2024-07-26 11:37:03.832596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-07-26 11:37:03.832656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:08.276 [2024-07-26 11:37:03.832674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:08.276 [2024-07-26 11:37:03.832914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:08.276 [2024-07-26 11:37:03.833158] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.276 [2024-07-26 11:37:03.833183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.276 [2024-07-26 11:37:03.833199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.276 [2024-07-26 11:37:03.836797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.276 [2024-07-26 11:37:03.846147] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.276 [2024-07-26 11:37:03.846635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-07-26 11:37:03.846691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:08.276 [2024-07-26 11:37:03.846710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:08.276 [2024-07-26 11:37:03.846950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:08.276 [2024-07-26 11:37:03.847195] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.276 [2024-07-26 11:37:03.847221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.276 [2024-07-26 11:37:03.847238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.276 [2024-07-26 11:37:03.850829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.276 [2024-07-26 11:37:03.860148] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.276 [2024-07-26 11:37:03.860626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-07-26 11:37:03.860658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:08.276 [2024-07-26 11:37:03.860677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:08.276 [2024-07-26 11:37:03.860916] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:08.276 [2024-07-26 11:37:03.861161] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.276 [2024-07-26 11:37:03.861186] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.276 [2024-07-26 11:37:03.861202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.276 [2024-07-26 11:37:03.864794] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.276 [2024-07-26 11:37:03.874110] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.276 [2024-07-26 11:37:03.874567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.276 [2024-07-26 11:37:03.874599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:08.276 [2024-07-26 11:37:03.874618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:08.276 [2024-07-26 11:37:03.874864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:08.277 [2024-07-26 11:37:03.875108] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.277 [2024-07-26 11:37:03.875133] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.277 [2024-07-26 11:37:03.875150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.277 [2024-07-26 11:37:03.878731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.277 [2024-07-26 11:37:03.888032] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.277 [2024-07-26 11:37:03.888476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-07-26 11:37:03.888508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:08.277 [2024-07-26 11:37:03.888527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:08.277 [2024-07-26 11:37:03.888766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:08.277 [2024-07-26 11:37:03.889010] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.277 [2024-07-26 11:37:03.889035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.277 [2024-07-26 11:37:03.889051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.277 [2024-07-26 11:37:03.892637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.277 [2024-07-26 11:37:03.901941] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.277 [2024-07-26 11:37:03.902371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-07-26 11:37:03.902403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:08.277 [2024-07-26 11:37:03.902422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:08.277 [2024-07-26 11:37:03.902671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:08.277 [2024-07-26 11:37:03.902914] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.277 [2024-07-26 11:37:03.902938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.277 [2024-07-26 11:37:03.902955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.277 [2024-07-26 11:37:03.906535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.277 [2024-07-26 11:37:03.915828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.277 [2024-07-26 11:37:03.916310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-07-26 11:37:03.916342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:08.277 [2024-07-26 11:37:03.916360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:08.277 [2024-07-26 11:37:03.916623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:08.277 [2024-07-26 11:37:03.916868] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.277 [2024-07-26 11:37:03.916893] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.277 [2024-07-26 11:37:03.916915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.277 [2024-07-26 11:37:03.920500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.536 [2024-07-26 11:37:03.929818] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.536 [2024-07-26 11:37:03.930416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.536 [2024-07-26 11:37:03.930485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:08.536 [2024-07-26 11:37:03.930508] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:08.536 [2024-07-26 11:37:03.930755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:08.536 [2024-07-26 11:37:03.930999] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.536 [2024-07-26 11:37:03.931025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.536 [2024-07-26 11:37:03.931042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.536 [2024-07-26 11:37:03.934643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.536 [2024-07-26 11:37:03.943748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.536 [2024-07-26 11:37:03.944331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.536 [2024-07-26 11:37:03.944394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:08.536 [2024-07-26 11:37:03.944415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:08.536 [2024-07-26 11:37:03.944671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:08.536 [2024-07-26 11:37:03.944916] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.536 [2024-07-26 11:37:03.944942] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.536 [2024-07-26 11:37:03.944959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.536 [2024-07-26 11:37:03.948549] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.536 [2024-07-26 11:37:03.957633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.536 [2024-07-26 11:37:03.958174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.536 [2024-07-26 11:37:03.958227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:08.537 [2024-07-26 11:37:03.958246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:08.537 [2024-07-26 11:37:03.958499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:08.537 [2024-07-26 11:37:03.958744] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.537 [2024-07-26 11:37:03.958770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.537 [2024-07-26 11:37:03.958786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.537 [2024-07-26 11:37:03.962373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.537 [2024-07-26 11:37:03.971707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.537 [2024-07-26 11:37:03.972221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.537 [2024-07-26 11:37:03.972261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:08.537 [2024-07-26 11:37:03.972281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:08.537 [2024-07-26 11:37:03.972534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:08.537 [2024-07-26 11:37:03.972778] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.537 [2024-07-26 11:37:03.972803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.537 [2024-07-26 11:37:03.972820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.537 [2024-07-26 11:37:03.976397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.537 - 00:29:09.061: the identical reconnect-failure cycle for tqpair=0x207b540 (resetting controller -> connect() failed, errno = 111 -> sock connection error -> Failed to flush (9): Bad file descriptor -> Ctrlr is in error state -> controller reinitialization failed -> in failed state. -> Resetting controller failed.) repeats another 48 times between [2024-07-26 11:37:03.985705] and [2024-07-26 11:37:04.645251], differing only in timestamps.
00:29:09.061 [2024-07-26 11:37:04.654545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.061 [2024-07-26 11:37:04.655002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.061 [2024-07-26 11:37:04.655034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.061 [2024-07-26 11:37:04.655052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.061 [2024-07-26 11:37:04.655291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.061 [2024-07-26 11:37:04.655547] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.061 [2024-07-26 11:37:04.655572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.061 [2024-07-26 11:37:04.655588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.061 [2024-07-26 11:37:04.659164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.061 [2024-07-26 11:37:04.668454] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.061 [2024-07-26 11:37:04.668901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.061 [2024-07-26 11:37:04.668932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.061 [2024-07-26 11:37:04.668950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.061 [2024-07-26 11:37:04.669189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.061 [2024-07-26 11:37:04.669444] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.061 [2024-07-26 11:37:04.669475] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.061 [2024-07-26 11:37:04.669492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.061 [2024-07-26 11:37:04.673079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.061 [2024-07-26 11:37:04.682368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.061 [2024-07-26 11:37:04.682843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.061 [2024-07-26 11:37:04.682875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.061 [2024-07-26 11:37:04.682893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.061 [2024-07-26 11:37:04.683132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.061 [2024-07-26 11:37:04.683375] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.061 [2024-07-26 11:37:04.683400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.061 [2024-07-26 11:37:04.683416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.061 [2024-07-26 11:37:04.686999] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.061 [2024-07-26 11:37:04.696283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.061 [2024-07-26 11:37:04.696734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.061 [2024-07-26 11:37:04.696766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.061 [2024-07-26 11:37:04.696784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.061 [2024-07-26 11:37:04.697023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.061 [2024-07-26 11:37:04.697266] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.061 [2024-07-26 11:37:04.697290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.061 [2024-07-26 11:37:04.697306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.061 [2024-07-26 11:37:04.700892] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.061 [2024-07-26 11:37:04.710189] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.061 [2024-07-26 11:37:04.710620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.061 [2024-07-26 11:37:04.710652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.061 [2024-07-26 11:37:04.710670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.061 [2024-07-26 11:37:04.710909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.061 [2024-07-26 11:37:04.711152] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.061 [2024-07-26 11:37:04.711176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.061 [2024-07-26 11:37:04.711191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.061 [2024-07-26 11:37:04.714778] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.320 [2024-07-26 11:37:04.724078] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.321 [2024-07-26 11:37:04.724517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.321 [2024-07-26 11:37:04.724549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.321 [2024-07-26 11:37:04.724568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.321 [2024-07-26 11:37:04.724807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.321 [2024-07-26 11:37:04.725051] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.321 [2024-07-26 11:37:04.725075] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.321 [2024-07-26 11:37:04.725091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.321 [2024-07-26 11:37:04.728673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.321 [2024-07-26 11:37:04.737956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.321 [2024-07-26 11:37:04.738399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.321 [2024-07-26 11:37:04.738438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.321 [2024-07-26 11:37:04.738458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.321 [2024-07-26 11:37:04.738697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.321 [2024-07-26 11:37:04.738940] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.321 [2024-07-26 11:37:04.738965] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.321 [2024-07-26 11:37:04.738981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.321 [2024-07-26 11:37:04.742570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.321 [2024-07-26 11:37:04.751868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.321 [2024-07-26 11:37:04.752326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.321 [2024-07-26 11:37:04.752377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.321 [2024-07-26 11:37:04.752396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.321 [2024-07-26 11:37:04.752648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.321 [2024-07-26 11:37:04.752891] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.321 [2024-07-26 11:37:04.752915] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.321 [2024-07-26 11:37:04.752931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.321 [2024-07-26 11:37:04.756512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.321 [2024-07-26 11:37:04.765796] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.321 [2024-07-26 11:37:04.766245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.321 [2024-07-26 11:37:04.766276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.321 [2024-07-26 11:37:04.766294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.321 [2024-07-26 11:37:04.766550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.321 [2024-07-26 11:37:04.766794] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.321 [2024-07-26 11:37:04.766819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.321 [2024-07-26 11:37:04.766835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.321 [2024-07-26 11:37:04.770411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.321 [2024-07-26 11:37:04.779713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.321 [2024-07-26 11:37:04.780149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.321 [2024-07-26 11:37:04.780181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.321 [2024-07-26 11:37:04.780199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.321 [2024-07-26 11:37:04.780449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.321 [2024-07-26 11:37:04.780693] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.321 [2024-07-26 11:37:04.780718] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.321 [2024-07-26 11:37:04.780734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.321 [2024-07-26 11:37:04.784309] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.321 [2024-07-26 11:37:04.793600] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.321 [2024-07-26 11:37:04.794032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.321 [2024-07-26 11:37:04.794073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.321 [2024-07-26 11:37:04.794091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.321 [2024-07-26 11:37:04.794329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.321 [2024-07-26 11:37:04.794585] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.321 [2024-07-26 11:37:04.794610] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.321 [2024-07-26 11:37:04.794626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.321 [2024-07-26 11:37:04.798200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.321 [2024-07-26 11:37:04.807498] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.321 [2024-07-26 11:37:04.807946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.321 [2024-07-26 11:37:04.807977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.321 [2024-07-26 11:37:04.807995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.321 [2024-07-26 11:37:04.808234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.321 [2024-07-26 11:37:04.808489] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.321 [2024-07-26 11:37:04.808515] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.321 [2024-07-26 11:37:04.808539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.321 [2024-07-26 11:37:04.812116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.321 [2024-07-26 11:37:04.821401] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.321 [2024-07-26 11:37:04.821855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.321 [2024-07-26 11:37:04.821887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.321 [2024-07-26 11:37:04.821905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.321 [2024-07-26 11:37:04.822144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.321 [2024-07-26 11:37:04.822387] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.321 [2024-07-26 11:37:04.822412] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.321 [2024-07-26 11:37:04.822437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.321 [2024-07-26 11:37:04.826017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.321 [2024-07-26 11:37:04.835305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.321 [2024-07-26 11:37:04.835735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.321 [2024-07-26 11:37:04.835767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.321 [2024-07-26 11:37:04.835786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.321 [2024-07-26 11:37:04.836025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.321 [2024-07-26 11:37:04.836268] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.321 [2024-07-26 11:37:04.836292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.321 [2024-07-26 11:37:04.836308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.321 [2024-07-26 11:37:04.839893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.321 [2024-07-26 11:37:04.849182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.321 [2024-07-26 11:37:04.849647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.321 [2024-07-26 11:37:04.849679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.321 [2024-07-26 11:37:04.849697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.321 [2024-07-26 11:37:04.849936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.321 [2024-07-26 11:37:04.850180] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.321 [2024-07-26 11:37:04.850205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.321 [2024-07-26 11:37:04.850221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.321 [2024-07-26 11:37:04.853806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.321 [2024-07-26 11:37:04.863094] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.322 [2024-07-26 11:37:04.863543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.322 [2024-07-26 11:37:04.863575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.322 [2024-07-26 11:37:04.863594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.322 [2024-07-26 11:37:04.863833] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.322 [2024-07-26 11:37:04.864076] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.322 [2024-07-26 11:37:04.864100] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.322 [2024-07-26 11:37:04.864117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.322 [2024-07-26 11:37:04.867719] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.322 [2024-07-26 11:37:04.877012] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.322 [2024-07-26 11:37:04.877464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.322 [2024-07-26 11:37:04.877497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.322 [2024-07-26 11:37:04.877516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.322 [2024-07-26 11:37:04.877756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.322 [2024-07-26 11:37:04.877999] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.322 [2024-07-26 11:37:04.878024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.322 [2024-07-26 11:37:04.878040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.322 [2024-07-26 11:37:04.881626] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.322 [2024-07-26 11:37:04.890909] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.322 [2024-07-26 11:37:04.891351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.322 [2024-07-26 11:37:04.891382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.322 [2024-07-26 11:37:04.891400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.322 [2024-07-26 11:37:04.891650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.322 [2024-07-26 11:37:04.891895] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.322 [2024-07-26 11:37:04.891919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.322 [2024-07-26 11:37:04.891935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.322 [2024-07-26 11:37:04.895520] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.322 [2024-07-26 11:37:04.904819] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.322 [2024-07-26 11:37:04.905269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.322 [2024-07-26 11:37:04.905300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.322 [2024-07-26 11:37:04.905318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.322 [2024-07-26 11:37:04.905569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.322 [2024-07-26 11:37:04.905820] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.322 [2024-07-26 11:37:04.905845] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.322 [2024-07-26 11:37:04.905862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.322 [2024-07-26 11:37:04.909445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.322 [2024-07-26 11:37:04.918730] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.322 [2024-07-26 11:37:04.919163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.322 [2024-07-26 11:37:04.919194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.322 [2024-07-26 11:37:04.919212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.322 [2024-07-26 11:37:04.919463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.322 [2024-07-26 11:37:04.919707] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.322 [2024-07-26 11:37:04.919732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.322 [2024-07-26 11:37:04.919748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.322 [2024-07-26 11:37:04.923323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.322 [2024-07-26 11:37:04.932618] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.322 [2024-07-26 11:37:04.933040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.322 [2024-07-26 11:37:04.933071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.322 [2024-07-26 11:37:04.933090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.322 [2024-07-26 11:37:04.933329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.322 [2024-07-26 11:37:04.933582] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.322 [2024-07-26 11:37:04.933607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.322 [2024-07-26 11:37:04.933624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.322 [2024-07-26 11:37:04.937198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.322 [2024-07-26 11:37:04.946499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.322 [2024-07-26 11:37:04.946947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.322 [2024-07-26 11:37:04.946979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.322 [2024-07-26 11:37:04.946997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.322 [2024-07-26 11:37:04.947237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.322 [2024-07-26 11:37:04.947491] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.322 [2024-07-26 11:37:04.947525] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.322 [2024-07-26 11:37:04.947542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.322 [2024-07-26 11:37:04.951125] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.322 [2024-07-26 11:37:04.960410] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.322 [2024-07-26 11:37:04.960868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.322 [2024-07-26 11:37:04.960901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.322 [2024-07-26 11:37:04.960919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.322 [2024-07-26 11:37:04.961160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.322 [2024-07-26 11:37:04.961403] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.322 [2024-07-26 11:37:04.961436] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.322 [2024-07-26 11:37:04.961455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.322 [2024-07-26 11:37:04.965034] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.322 [2024-07-26 11:37:04.974336] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.322 [2024-07-26 11:37:04.974772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.322 [2024-07-26 11:37:04.974804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.322 [2024-07-26 11:37:04.974823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.322 [2024-07-26 11:37:04.975063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.322 [2024-07-26 11:37:04.975307] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.322 [2024-07-26 11:37:04.975331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.322 [2024-07-26 11:37:04.975347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.322 [2024-07-26 11:37:04.978934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.582 [2024-07-26 11:37:04.988225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.582 [2024-07-26 11:37:04.988670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.582 [2024-07-26 11:37:04.988702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.582 [2024-07-26 11:37:04.988720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.582 [2024-07-26 11:37:04.988959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.582 [2024-07-26 11:37:04.989203] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.582 [2024-07-26 11:37:04.989227] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.582 [2024-07-26 11:37:04.989243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.582 [2024-07-26 11:37:04.992831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.582 [2024-07-26 11:37:05.002169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.582 [2024-07-26 11:37:05.002628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.582 [2024-07-26 11:37:05.002661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.582 [2024-07-26 11:37:05.002685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.582 [2024-07-26 11:37:05.002926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.582 [2024-07-26 11:37:05.003169] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.582 [2024-07-26 11:37:05.003194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.582 [2024-07-26 11:37:05.003210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.582 [2024-07-26 11:37:05.006816] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.582 [2024-07-26 11:37:05.016113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.582 [2024-07-26 11:37:05.016521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.582 [2024-07-26 11:37:05.016554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.582 [2024-07-26 11:37:05.016573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.582 [2024-07-26 11:37:05.016813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.582 [2024-07-26 11:37:05.017057] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.582 [2024-07-26 11:37:05.017082] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.582 [2024-07-26 11:37:05.017098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.582 [2024-07-26 11:37:05.020682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.582 [2024-07-26 11:37:05.029974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.582 [2024-07-26 11:37:05.030438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.582 [2024-07-26 11:37:05.030470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.582 [2024-07-26 11:37:05.030490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.582 [2024-07-26 11:37:05.030730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.582 [2024-07-26 11:37:05.030974] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.583 [2024-07-26 11:37:05.030999] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.583 [2024-07-26 11:37:05.031016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.583 [2024-07-26 11:37:05.034603] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.583 [2024-07-26 11:37:05.043899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.583 [2024-07-26 11:37:05.044351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.583 [2024-07-26 11:37:05.044382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.583 [2024-07-26 11:37:05.044401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.583 [2024-07-26 11:37:05.044649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.583 [2024-07-26 11:37:05.044900] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.583 [2024-07-26 11:37:05.044925] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.583 [2024-07-26 11:37:05.044941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.583 [2024-07-26 11:37:05.048525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.583 [2024-07-26 11:37:05.057815] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.583 [2024-07-26 11:37:05.058259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.583 [2024-07-26 11:37:05.058312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.583 [2024-07-26 11:37:05.058330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.583 [2024-07-26 11:37:05.058578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.583 [2024-07-26 11:37:05.058823] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.583 [2024-07-26 11:37:05.058847] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.583 [2024-07-26 11:37:05.058863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.583 [2024-07-26 11:37:05.062446] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.583 [2024-07-26 11:37:05.071746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.583 [2024-07-26 11:37:05.072222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.583 [2024-07-26 11:37:05.072275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.583 [2024-07-26 11:37:05.072293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.583 [2024-07-26 11:37:05.072541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.583 [2024-07-26 11:37:05.072785] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.583 [2024-07-26 11:37:05.072810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.583 [2024-07-26 11:37:05.072826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.583 [2024-07-26 11:37:05.076433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.583 [2024-07-26 11:37:05.085731] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.583 [2024-07-26 11:37:05.086184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.583 [2024-07-26 11:37:05.086216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.583 [2024-07-26 11:37:05.086234] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.583 [2024-07-26 11:37:05.086485] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.583 [2024-07-26 11:37:05.086729] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.583 [2024-07-26 11:37:05.086754] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.583 [2024-07-26 11:37:05.086770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.583 [2024-07-26 11:37:05.090346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.583 [2024-07-26 11:37:05.099646] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.583 [2024-07-26 11:37:05.100125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.583 [2024-07-26 11:37:05.100177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.583 [2024-07-26 11:37:05.100196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.583 [2024-07-26 11:37:05.100443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.583 [2024-07-26 11:37:05.100687] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.583 [2024-07-26 11:37:05.100712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.583 [2024-07-26 11:37:05.100727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.583 [2024-07-26 11:37:05.104305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.583 [2024-07-26 11:37:05.113609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.583 [2024-07-26 11:37:05.114037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.583 [2024-07-26 11:37:05.114068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.583 [2024-07-26 11:37:05.114087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.583 [2024-07-26 11:37:05.114326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.583 [2024-07-26 11:37:05.114581] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.583 [2024-07-26 11:37:05.114606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.583 [2024-07-26 11:37:05.114623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.583 [2024-07-26 11:37:05.118199] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.583 [2024-07-26 11:37:05.127511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.583 [2024-07-26 11:37:05.127960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.583 [2024-07-26 11:37:05.127991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.583 [2024-07-26 11:37:05.128010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.583 [2024-07-26 11:37:05.128249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.583 [2024-07-26 11:37:05.128504] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.583 [2024-07-26 11:37:05.128529] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.583 [2024-07-26 11:37:05.128545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.583 [2024-07-26 11:37:05.132122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.583 [2024-07-26 11:37:05.141411] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.583 [2024-07-26 11:37:05.141856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.583 [2024-07-26 11:37:05.141887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.583 [2024-07-26 11:37:05.141911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.583 [2024-07-26 11:37:05.142151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.583 [2024-07-26 11:37:05.142394] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.583 [2024-07-26 11:37:05.142418] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.583 [2024-07-26 11:37:05.142449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.583 [2024-07-26 11:37:05.146027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.583 [2024-07-26 11:37:05.155310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.583 [2024-07-26 11:37:05.155797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.583 [2024-07-26 11:37:05.155830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.583 [2024-07-26 11:37:05.155848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.583 [2024-07-26 11:37:05.156088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.583 [2024-07-26 11:37:05.156331] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.583 [2024-07-26 11:37:05.156356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.583 [2024-07-26 11:37:05.156372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.583 [2024-07-26 11:37:05.159956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.583 [2024-07-26 11:37:05.169253] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.583 [2024-07-26 11:37:05.169723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.583 [2024-07-26 11:37:05.169755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.583 [2024-07-26 11:37:05.169773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.583 [2024-07-26 11:37:05.170012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.583 [2024-07-26 11:37:05.170255] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.583 [2024-07-26 11:37:05.170280] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.584 [2024-07-26 11:37:05.170296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.584 [2024-07-26 11:37:05.173890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.584 [2024-07-26 11:37:05.183186] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.584 [2024-07-26 11:37:05.183628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.584 [2024-07-26 11:37:05.183660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.584 [2024-07-26 11:37:05.183678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.584 [2024-07-26 11:37:05.183918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.584 [2024-07-26 11:37:05.184161] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.584 [2024-07-26 11:37:05.184190] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.584 [2024-07-26 11:37:05.184207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.584 [2024-07-26 11:37:05.187794] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.584 [2024-07-26 11:37:05.197112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.584 [2024-07-26 11:37:05.197562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.584 [2024-07-26 11:37:05.197594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.584 [2024-07-26 11:37:05.197613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.584 [2024-07-26 11:37:05.197853] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.584 [2024-07-26 11:37:05.198096] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.584 [2024-07-26 11:37:05.198120] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.584 [2024-07-26 11:37:05.198136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.584 [2024-07-26 11:37:05.201748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.584 [2024-07-26 11:37:05.211079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.584 [2024-07-26 11:37:05.211504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.584 [2024-07-26 11:37:05.211536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.584 [2024-07-26 11:37:05.211555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.584 [2024-07-26 11:37:05.211795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.584 [2024-07-26 11:37:05.212038] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.584 [2024-07-26 11:37:05.212063] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.584 [2024-07-26 11:37:05.212079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.584 [2024-07-26 11:37:05.215677] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.584 [2024-07-26 11:37:05.224992] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.584 [2024-07-26 11:37:05.225466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.584 [2024-07-26 11:37:05.225499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.584 [2024-07-26 11:37:05.225518] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.584 [2024-07-26 11:37:05.225758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.584 [2024-07-26 11:37:05.226002] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.584 [2024-07-26 11:37:05.226028] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.584 [2024-07-26 11:37:05.226044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.584 [2024-07-26 11:37:05.229642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.584 [2024-07-26 11:37:05.238965] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.584 [2024-07-26 11:37:05.239511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.584 [2024-07-26 11:37:05.239543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.584 [2024-07-26 11:37:05.239561] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.584 [2024-07-26 11:37:05.239800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.584 [2024-07-26 11:37:05.240044] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.584 [2024-07-26 11:37:05.240070] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.584 [2024-07-26 11:37:05.240087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.843 [2024-07-26 11:37:05.243698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.843 [2024-07-26 11:37:05.253016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.843 [2024-07-26 11:37:05.253519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.843 [2024-07-26 11:37:05.253551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.843 [2024-07-26 11:37:05.253569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.843 [2024-07-26 11:37:05.253809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.843 [2024-07-26 11:37:05.254053] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.843 [2024-07-26 11:37:05.254078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.843 [2024-07-26 11:37:05.254094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.843 [2024-07-26 11:37:05.257696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.843 [2024-07-26 11:37:05.267021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.843 [2024-07-26 11:37:05.267495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.843 [2024-07-26 11:37:05.267527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.843 [2024-07-26 11:37:05.267545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.843 [2024-07-26 11:37:05.267786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.843 [2024-07-26 11:37:05.268031] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.843 [2024-07-26 11:37:05.268057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.843 [2024-07-26 11:37:05.268073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.843 [2024-07-26 11:37:05.271671] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.843 [2024-07-26 11:37:05.280979] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.843 [2024-07-26 11:37:05.281498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.843 [2024-07-26 11:37:05.281531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.843 [2024-07-26 11:37:05.281550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.843 [2024-07-26 11:37:05.281795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.843 [2024-07-26 11:37:05.282038] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.843 [2024-07-26 11:37:05.282064] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.843 [2024-07-26 11:37:05.282080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.843 [2024-07-26 11:37:05.285670] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.843 [2024-07-26 11:37:05.294968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.843 [2024-07-26 11:37:05.295504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.843 [2024-07-26 11:37:05.295537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.843 [2024-07-26 11:37:05.295555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.843 [2024-07-26 11:37:05.295794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.843 [2024-07-26 11:37:05.296037] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.843 [2024-07-26 11:37:05.296062] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.843 [2024-07-26 11:37:05.296078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.843 [2024-07-26 11:37:05.299664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.843 [2024-07-26 11:37:05.308966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.843 [2024-07-26 11:37:05.309473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.843 [2024-07-26 11:37:05.309506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.843 [2024-07-26 11:37:05.309524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.843 [2024-07-26 11:37:05.309763] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.843 [2024-07-26 11:37:05.310006] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.843 [2024-07-26 11:37:05.310030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.843 [2024-07-26 11:37:05.310046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.843 [2024-07-26 11:37:05.313632] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.843 [2024-07-26 11:37:05.322930] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.844 [2024-07-26 11:37:05.323449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.844 [2024-07-26 11:37:05.323519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.844 [2024-07-26 11:37:05.323538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.844 [2024-07-26 11:37:05.323777] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.844 [2024-07-26 11:37:05.324021] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.844 [2024-07-26 11:37:05.324046] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.844 [2024-07-26 11:37:05.324068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.844 [2024-07-26 11:37:05.327655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.844 [2024-07-26 11:37:05.336945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.844 [2024-07-26 11:37:05.337486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.844 [2024-07-26 11:37:05.337539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.844 [2024-07-26 11:37:05.337558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.844 [2024-07-26 11:37:05.337799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.844 [2024-07-26 11:37:05.338043] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.844 [2024-07-26 11:37:05.338068] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.844 [2024-07-26 11:37:05.338084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.844 [2024-07-26 11:37:05.341678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.844 [2024-07-26 11:37:05.350976] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.844 [2024-07-26 11:37:05.351485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.844 [2024-07-26 11:37:05.351517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.844 [2024-07-26 11:37:05.351535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.844 [2024-07-26 11:37:05.351773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.844 [2024-07-26 11:37:05.352018] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.844 [2024-07-26 11:37:05.352043] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.844 [2024-07-26 11:37:05.352060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.844 [2024-07-26 11:37:05.355645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.844 [2024-07-26 11:37:05.364945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.844 [2024-07-26 11:37:05.365551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.844 [2024-07-26 11:37:05.365613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.844 [2024-07-26 11:37:05.365633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.844 [2024-07-26 11:37:05.365880] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.844 [2024-07-26 11:37:05.366124] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.844 [2024-07-26 11:37:05.366149] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.844 [2024-07-26 11:37:05.366166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.844 [2024-07-26 11:37:05.369764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.844 [2024-07-26 11:37:05.378871] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.844 [2024-07-26 11:37:05.379387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.844 [2024-07-26 11:37:05.379438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.844 [2024-07-26 11:37:05.379461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.844 [2024-07-26 11:37:05.379703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.844 [2024-07-26 11:37:05.379947] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.844 [2024-07-26 11:37:05.379973] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.844 [2024-07-26 11:37:05.379989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.844 [2024-07-26 11:37:05.383581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.844 [2024-07-26 11:37:05.392883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.844 [2024-07-26 11:37:05.393460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.844 [2024-07-26 11:37:05.393493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.844 [2024-07-26 11:37:05.393512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.844 [2024-07-26 11:37:05.393752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.844 [2024-07-26 11:37:05.393997] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.844 [2024-07-26 11:37:05.394023] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.844 [2024-07-26 11:37:05.394039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.844 [2024-07-26 11:37:05.397630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.844 [2024-07-26 11:37:05.406936] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.844 [2024-07-26 11:37:05.407472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.844 [2024-07-26 11:37:05.407505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.844 [2024-07-26 11:37:05.407523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.844 [2024-07-26 11:37:05.407762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.844 [2024-07-26 11:37:05.408007] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.844 [2024-07-26 11:37:05.408033] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.844 [2024-07-26 11:37:05.408050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.844 [2024-07-26 11:37:05.411638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.844 [2024-07-26 11:37:05.420937] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.844 [2024-07-26 11:37:05.421473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.844 [2024-07-26 11:37:05.421505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.844 [2024-07-26 11:37:05.421524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.844 [2024-07-26 11:37:05.421764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.844 [2024-07-26 11:37:05.422013] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.844 [2024-07-26 11:37:05.422038] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.844 [2024-07-26 11:37:05.422055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.844 [2024-07-26 11:37:05.425643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.844 [2024-07-26 11:37:05.434940] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.844 [2024-07-26 11:37:05.435448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.844 [2024-07-26 11:37:05.435480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.844 [2024-07-26 11:37:05.435499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.844 [2024-07-26 11:37:05.435739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.844 [2024-07-26 11:37:05.435981] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.844 [2024-07-26 11:37:05.436006] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.844 [2024-07-26 11:37:05.436023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.844 [2024-07-26 11:37:05.439617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.844 [2024-07-26 11:37:05.448923] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.844 [2024-07-26 11:37:05.449504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.844 [2024-07-26 11:37:05.449566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.844 [2024-07-26 11:37:05.449587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.844 [2024-07-26 11:37:05.449834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.844 [2024-07-26 11:37:05.450078] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.844 [2024-07-26 11:37:05.450104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.844 [2024-07-26 11:37:05.450120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.844 [2024-07-26 11:37:05.453717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.844 [2024-07-26 11:37:05.462810] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.844 [2024-07-26 11:37:05.463446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.845 [2024-07-26 11:37:05.463491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.845 [2024-07-26 11:37:05.463512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.845 [2024-07-26 11:37:05.463759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.845 [2024-07-26 11:37:05.464003] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.845 [2024-07-26 11:37:05.464031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.845 [2024-07-26 11:37:05.464047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.845 [2024-07-26 11:37:05.467646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:09.845 [2024-07-26 11:37:05.476770] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.845 [2024-07-26 11:37:05.477282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.845 [2024-07-26 11:37:05.477334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.845 [2024-07-26 11:37:05.477353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.845 [2024-07-26 11:37:05.477606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.845 [2024-07-26 11:37:05.477851] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.845 [2024-07-26 11:37:05.477877] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.845 [2024-07-26 11:37:05.477893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.845 [2024-07-26 11:37:05.481486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:09.845 [2024-07-26 11:37:05.490779] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:09.845 [2024-07-26 11:37:05.491310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.845 [2024-07-26 11:37:05.491344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:09.845 [2024-07-26 11:37:05.491363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:09.845 [2024-07-26 11:37:05.491629] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:09.845 [2024-07-26 11:37:05.491873] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:09.845 [2024-07-26 11:37:05.491899] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:09.845 [2024-07-26 11:37:05.491915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:09.845 [2024-07-26 11:37:05.495501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.104 [2024-07-26 11:37:05.504803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.104 [2024-07-26 11:37:05.505308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-07-26 11:37:05.505341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:10.104 [2024-07-26 11:37:05.505359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:10.104 [2024-07-26 11:37:05.505613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:10.104 [2024-07-26 11:37:05.505856] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.104 [2024-07-26 11:37:05.505881] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.104 [2024-07-26 11:37:05.505898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.104 [2024-07-26 11:37:05.509501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.104 [2024-07-26 11:37:05.518733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.104 [2024-07-26 11:37:05.519232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-07-26 11:37:05.519273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:10.104 [2024-07-26 11:37:05.519300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:10.104 [2024-07-26 11:37:05.519559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:10.104 [2024-07-26 11:37:05.519803] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.104 [2024-07-26 11:37:05.519829] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.104 [2024-07-26 11:37:05.519845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.104 [2024-07-26 11:37:05.523425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.104 [2024-07-26 11:37:05.532723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.104 [2024-07-26 11:37:05.533250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-07-26 11:37:05.533286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:10.104 [2024-07-26 11:37:05.533305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:10.104 [2024-07-26 11:37:05.533559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:10.104 [2024-07-26 11:37:05.533803] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.104 [2024-07-26 11:37:05.533829] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.105 [2024-07-26 11:37:05.533846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.105 [2024-07-26 11:37:05.537426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.105 [2024-07-26 11:37:05.546739] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.105 [2024-07-26 11:37:05.547358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-07-26 11:37:05.547404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:10.105 [2024-07-26 11:37:05.547424] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:10.105 [2024-07-26 11:37:05.547688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:10.105 [2024-07-26 11:37:05.547932] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.105 [2024-07-26 11:37:05.547957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.105 [2024-07-26 11:37:05.547975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.105 [2024-07-26 11:37:05.551569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.105 [2024-07-26 11:37:05.560659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.105 [2024-07-26 11:37:05.561182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-07-26 11:37:05.561216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:10.105 [2024-07-26 11:37:05.561235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:10.105 [2024-07-26 11:37:05.561489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:10.105 [2024-07-26 11:37:05.561733] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.105 [2024-07-26 11:37:05.561765] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.105 [2024-07-26 11:37:05.561782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.105 [2024-07-26 11:37:05.565361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.105 [2024-07-26 11:37:05.574680] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.105 [2024-07-26 11:37:05.575108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-07-26 11:37:05.575141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:10.105 [2024-07-26 11:37:05.575159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:10.105 [2024-07-26 11:37:05.575399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:10.105 [2024-07-26 11:37:05.575655] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.105 [2024-07-26 11:37:05.575681] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.105 [2024-07-26 11:37:05.575697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.105 [2024-07-26 11:37:05.579280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.105 [2024-07-26 11:37:05.588588] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.105 [2024-07-26 11:37:05.589079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-07-26 11:37:05.589112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:10.105 [2024-07-26 11:37:05.589130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:10.105 [2024-07-26 11:37:05.589370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:10.105 [2024-07-26 11:37:05.589629] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.105 [2024-07-26 11:37:05.589656] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.105 [2024-07-26 11:37:05.589672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.105 [2024-07-26 11:37:05.593250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.105 [2024-07-26 11:37:05.602553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.105 [2024-07-26 11:37:05.603069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-07-26 11:37:05.603101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:10.105 [2024-07-26 11:37:05.603119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:10.105 [2024-07-26 11:37:05.603359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:10.105 [2024-07-26 11:37:05.603620] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.105 [2024-07-26 11:37:05.603646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.105 [2024-07-26 11:37:05.603663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.105 [2024-07-26 11:37:05.607258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.105 [2024-07-26 11:37:05.616571] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.105 [2024-07-26 11:37:05.617061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-07-26 11:37:05.617094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:10.105 [2024-07-26 11:37:05.617112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:10.105 [2024-07-26 11:37:05.617352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:10.105 [2024-07-26 11:37:05.617621] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.105 [2024-07-26 11:37:05.617647] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.105 [2024-07-26 11:37:05.617664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.105 [2024-07-26 11:37:05.621242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.105 [2024-07-26 11:37:05.630537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.105 [2024-07-26 11:37:05.631062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-07-26 11:37:05.631095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:10.105 [2024-07-26 11:37:05.631114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:10.105 [2024-07-26 11:37:05.631355] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:10.105 [2024-07-26 11:37:05.631613] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.105 [2024-07-26 11:37:05.631639] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.105 [2024-07-26 11:37:05.631655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.105 [2024-07-26 11:37:05.635235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.105 [2024-07-26 11:37:05.644543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.105 [2024-07-26 11:37:05.645069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-07-26 11:37:05.645102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:10.105 [2024-07-26 11:37:05.645120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:10.105 [2024-07-26 11:37:05.645360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:10.105 [2024-07-26 11:37:05.645619] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.105 [2024-07-26 11:37:05.645645] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.105 [2024-07-26 11:37:05.645660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.105 [2024-07-26 11:37:05.649240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.105 [2024-07-26 11:37:05.658542] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.105 [2024-07-26 11:37:05.659050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-07-26 11:37:05.659082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:10.105 [2024-07-26 11:37:05.659106] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:10.105 [2024-07-26 11:37:05.659346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:10.105 [2024-07-26 11:37:05.659603] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.105 [2024-07-26 11:37:05.659630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.105 [2024-07-26 11:37:05.659647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.105 [2024-07-26 11:37:05.663223] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.105 [2024-07-26 11:37:05.672545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.105 [2024-07-26 11:37:05.673041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-07-26 11:37:05.673072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:10.105 [2024-07-26 11:37:05.673090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:10.105 [2024-07-26 11:37:05.673330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:10.105 [2024-07-26 11:37:05.673589] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.106 [2024-07-26 11:37:05.673615] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.106 [2024-07-26 11:37:05.673631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.106 [2024-07-26 11:37:05.677211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.106 [2024-07-26 11:37:05.686530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.106 [2024-07-26 11:37:05.687144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-07-26 11:37:05.687191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:10.106 [2024-07-26 11:37:05.687212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:10.106 [2024-07-26 11:37:05.687476] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:10.106 [2024-07-26 11:37:05.687721] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.106 [2024-07-26 11:37:05.687747] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.106 [2024-07-26 11:37:05.687764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.106 [2024-07-26 11:37:05.691345] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.106 [2024-07-26 11:37:05.700475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.106 [2024-07-26 11:37:05.701048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-07-26 11:37:05.701100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:10.106 [2024-07-26 11:37:05.701119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:10.106 [2024-07-26 11:37:05.701360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:10.106 [2024-07-26 11:37:05.701617] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.106 [2024-07-26 11:37:05.701644] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.106 [2024-07-26 11:37:05.701667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.106 [2024-07-26 11:37:05.705251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.106 [2024-07-26 11:37:05.714357] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.106 [2024-07-26 11:37:05.714969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-07-26 11:37:05.715019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:10.106 [2024-07-26 11:37:05.715038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:10.106 [2024-07-26 11:37:05.715278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:10.106 [2024-07-26 11:37:05.715535] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.106 [2024-07-26 11:37:05.715562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.106 [2024-07-26 11:37:05.715579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.106 [2024-07-26 11:37:05.719155] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.106 [2024-07-26 11:37:05.728247] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.106 [2024-07-26 11:37:05.728889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-07-26 11:37:05.728936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:10.106 [2024-07-26 11:37:05.728956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:10.106 [2024-07-26 11:37:05.729202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:10.106 [2024-07-26 11:37:05.729463] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.106 [2024-07-26 11:37:05.729490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.106 [2024-07-26 11:37:05.729508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.106 [2024-07-26 11:37:05.733090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.106 [2024-07-26 11:37:05.742186] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.106 [2024-07-26 11:37:05.742734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-07-26 11:37:05.742769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:10.106 [2024-07-26 11:37:05.742789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:10.106 [2024-07-26 11:37:05.743030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:10.106 [2024-07-26 11:37:05.743274] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.106 [2024-07-26 11:37:05.743299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.106 [2024-07-26 11:37:05.743315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.106 [2024-07-26 11:37:05.746913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.106 [2024-07-26 11:37:05.756215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.106 [2024-07-26 11:37:05.756699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-07-26 11:37:05.756741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:10.106 [2024-07-26 11:37:05.756760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:10.106 [2024-07-26 11:37:05.757000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:10.106 [2024-07-26 11:37:05.757243] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.106 [2024-07-26 11:37:05.757268] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.106 [2024-07-26 11:37:05.757285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.106 [2024-07-26 11:37:05.760881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.365 [2024-07-26 11:37:05.770181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.365 [2024-07-26 11:37:05.770698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.365 [2024-07-26 11:37:05.770731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:10.365 [2024-07-26 11:37:05.770750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:10.365 [2024-07-26 11:37:05.770990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:10.365 [2024-07-26 11:37:05.771233] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.365 [2024-07-26 11:37:05.771259] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.365 [2024-07-26 11:37:05.771275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.365 [2024-07-26 11:37:05.774874] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.365 [2024-07-26 11:37:05.784171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.365 [2024-07-26 11:37:05.784817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.365 [2024-07-26 11:37:05.784863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:10.365 [2024-07-26 11:37:05.784884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:10.365 [2024-07-26 11:37:05.785131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:10.365 [2024-07-26 11:37:05.785375] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.365 [2024-07-26 11:37:05.785401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.365 [2024-07-26 11:37:05.785418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.365 [2024-07-26 11:37:05.789019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.365 [2024-07-26 11:37:05.798121] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.365 [2024-07-26 11:37:05.798646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.365 [2024-07-26 11:37:05.798681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:10.365 [2024-07-26 11:37:05.798700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:10.365 [2024-07-26 11:37:05.798947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:10.365 [2024-07-26 11:37:05.799192] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.365 [2024-07-26 11:37:05.799217] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.365 [2024-07-26 11:37:05.799234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.365 [2024-07-26 11:37:05.802830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.365 [2024-07-26 11:37:05.812136] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.365 [2024-07-26 11:37:05.812657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.365 [2024-07-26 11:37:05.812691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:10.365 [2024-07-26 11:37:05.812710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:10.365 [2024-07-26 11:37:05.812951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:10.365 [2024-07-26 11:37:05.813193] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.365 [2024-07-26 11:37:05.813219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.365 [2024-07-26 11:37:05.813235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.365 [2024-07-26 11:37:05.816830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.365 [2024-07-26 11:37:05.826125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.365 [2024-07-26 11:37:05.826748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.365 [2024-07-26 11:37:05.826795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:10.365 [2024-07-26 11:37:05.826815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:10.365 [2024-07-26 11:37:05.827061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:10.365 [2024-07-26 11:37:05.827306] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.365 [2024-07-26 11:37:05.827331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.366 [2024-07-26 11:37:05.827348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.366 [2024-07-26 11:37:05.830946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.366 [2024-07-26 11:37:05.840036] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.366 [2024-07-26 11:37:05.840573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.366 [2024-07-26 11:37:05.840607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:10.366 [2024-07-26 11:37:05.840626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:10.366 [2024-07-26 11:37:05.840866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:10.366 [2024-07-26 11:37:05.841110] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.366 [2024-07-26 11:37:05.841136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.366 [2024-07-26 11:37:05.841158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.366 [2024-07-26 11:37:05.844760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.366 [2024-07-26 11:37:05.854052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.366 [2024-07-26 11:37:05.854576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.366 [2024-07-26 11:37:05.854610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.366 [2024-07-26 11:37:05.854629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.366 [2024-07-26 11:37:05.854869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.366 [2024-07-26 11:37:05.855112] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.366 [2024-07-26 11:37:05.855137] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.366 [2024-07-26 11:37:05.855154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.366 [2024-07-26 11:37:05.858746] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.366 [2024-07-26 11:37:05.868039] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.366 [2024-07-26 11:37:05.868648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.366 [2024-07-26 11:37:05.868694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.366 [2024-07-26 11:37:05.868716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.366 [2024-07-26 11:37:05.868963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.366 [2024-07-26 11:37:05.869207] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.366 [2024-07-26 11:37:05.869233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.366 [2024-07-26 11:37:05.869250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.366 [2024-07-26 11:37:05.872855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.366 [2024-07-26 11:37:05.881946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.366 [2024-07-26 11:37:05.882475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.366 [2024-07-26 11:37:05.882509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.366 [2024-07-26 11:37:05.882528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.366 [2024-07-26 11:37:05.882768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.366 [2024-07-26 11:37:05.883011] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.366 [2024-07-26 11:37:05.883036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.366 [2024-07-26 11:37:05.883052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.366 [2024-07-26 11:37:05.886645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.366 [2024-07-26 11:37:05.895985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.366 [2024-07-26 11:37:05.896462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.366 [2024-07-26 11:37:05.896501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.366 [2024-07-26 11:37:05.896520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.366 [2024-07-26 11:37:05.896761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.366 [2024-07-26 11:37:05.897005] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.366 [2024-07-26 11:37:05.897031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.366 [2024-07-26 11:37:05.897048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.366 [2024-07-26 11:37:05.900638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.366 [2024-07-26 11:37:05.909946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.366 [2024-07-26 11:37:05.910478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.366 [2024-07-26 11:37:05.910511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.366 [2024-07-26 11:37:05.910530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.366 [2024-07-26 11:37:05.910770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.366 [2024-07-26 11:37:05.911013] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.366 [2024-07-26 11:37:05.911038] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.366 [2024-07-26 11:37:05.911055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.366 [2024-07-26 11:37:05.914646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.366 [2024-07-26 11:37:05.923941] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.366 [2024-07-26 11:37:05.924618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.366 [2024-07-26 11:37:05.924665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.366 [2024-07-26 11:37:05.924685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.366 [2024-07-26 11:37:05.924932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.366 [2024-07-26 11:37:05.925176] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.366 [2024-07-26 11:37:05.925201] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.366 [2024-07-26 11:37:05.925217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.366 [2024-07-26 11:37:05.928816] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.366 [2024-07-26 11:37:05.937908] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.366 [2024-07-26 11:37:05.938397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.366 [2024-07-26 11:37:05.938441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.366 [2024-07-26 11:37:05.938462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.366 [2024-07-26 11:37:05.938708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.366 [2024-07-26 11:37:05.938958] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.366 [2024-07-26 11:37:05.938984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.366 [2024-07-26 11:37:05.939000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.366 [2024-07-26 11:37:05.942593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.366 [2024-07-26 11:37:05.951895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.366 [2024-07-26 11:37:05.952454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.366 [2024-07-26 11:37:05.952488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.366 [2024-07-26 11:37:05.952507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.366 [2024-07-26 11:37:05.952747] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.366 [2024-07-26 11:37:05.952990] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.366 [2024-07-26 11:37:05.953015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.366 [2024-07-26 11:37:05.953032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.366 [2024-07-26 11:37:05.956641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.366 [2024-07-26 11:37:05.965940] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.366 [2024-07-26 11:37:05.966440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.366 [2024-07-26 11:37:05.966473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.366 [2024-07-26 11:37:05.966492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.366 [2024-07-26 11:37:05.966731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.366 [2024-07-26 11:37:05.966974] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.366 [2024-07-26 11:37:05.967000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.367 [2024-07-26 11:37:05.967016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.367 [2024-07-26 11:37:05.970603] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.367 [2024-07-26 11:37:05.979909] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.367 [2024-07-26 11:37:05.980446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.367 [2024-07-26 11:37:05.980479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.367 [2024-07-26 11:37:05.980497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.367 [2024-07-26 11:37:05.980744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.367 [2024-07-26 11:37:05.980987] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.367 [2024-07-26 11:37:05.981012] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.367 [2024-07-26 11:37:05.981029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.367 [2024-07-26 11:37:05.984620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.367 [2024-07-26 11:37:05.993912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.367 [2024-07-26 11:37:05.994458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.367 [2024-07-26 11:37:05.994492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.367 [2024-07-26 11:37:05.994511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.367 [2024-07-26 11:37:05.994751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.367 [2024-07-26 11:37:05.994994] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.367 [2024-07-26 11:37:05.995019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.367 [2024-07-26 11:37:05.995036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.367 [2024-07-26 11:37:05.998629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.367 [2024-07-26 11:37:06.007925] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.367 [2024-07-26 11:37:06.008458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.367 [2024-07-26 11:37:06.008490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.367 [2024-07-26 11:37:06.008509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.367 [2024-07-26 11:37:06.008758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.367 [2024-07-26 11:37:06.009002] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.367 [2024-07-26 11:37:06.009028] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.367 [2024-07-26 11:37:06.009044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.367 [2024-07-26 11:37:06.012635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.367 [2024-07-26 11:37:06.022053] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.367 [2024-07-26 11:37:06.022568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.367 [2024-07-26 11:37:06.022602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.367 [2024-07-26 11:37:06.022620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.367 [2024-07-26 11:37:06.022860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.367 [2024-07-26 11:37:06.023103] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.367 [2024-07-26 11:37:06.023129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.367 [2024-07-26 11:37:06.023145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.626 [2024-07-26 11:37:06.026736] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.626 [2024-07-26 11:37:06.036044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.626 [2024-07-26 11:37:06.036572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.626 [2024-07-26 11:37:06.036605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.626 [2024-07-26 11:37:06.036629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.626 [2024-07-26 11:37:06.036872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.626 [2024-07-26 11:37:06.037116] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.626 [2024-07-26 11:37:06.037141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.626 [2024-07-26 11:37:06.037158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.626 [2024-07-26 11:37:06.040754] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.626 [2024-07-26 11:37:06.050072] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.626 [2024-07-26 11:37:06.050554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.626 [2024-07-26 11:37:06.050587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.626 [2024-07-26 11:37:06.050605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.626 [2024-07-26 11:37:06.050845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.626 [2024-07-26 11:37:06.051090] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.626 [2024-07-26 11:37:06.051115] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.626 [2024-07-26 11:37:06.051132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.626 [2024-07-26 11:37:06.054734] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.626 [2024-07-26 11:37:06.064042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.626 [2024-07-26 11:37:06.064542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.626 [2024-07-26 11:37:06.064574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.626 [2024-07-26 11:37:06.064592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.626 [2024-07-26 11:37:06.064832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.626 [2024-07-26 11:37:06.065075] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.626 [2024-07-26 11:37:06.065100] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.626 [2024-07-26 11:37:06.065116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.626 [2024-07-26 11:37:06.068709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.626 [2024-07-26 11:37:06.078020] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.626 [2024-07-26 11:37:06.078509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.626 [2024-07-26 11:37:06.078542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.626 [2024-07-26 11:37:06.078560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.626 [2024-07-26 11:37:06.078801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.626 [2024-07-26 11:37:06.079043] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.626 [2024-07-26 11:37:06.079075] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.626 [2024-07-26 11:37:06.079093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.626 [2024-07-26 11:37:06.082678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.626 [2024-07-26 11:37:06.091989] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.626 [2024-07-26 11:37:06.092472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.626 [2024-07-26 11:37:06.092505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.626 [2024-07-26 11:37:06.092524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.626 [2024-07-26 11:37:06.092763] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.626 [2024-07-26 11:37:06.093007] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.626 [2024-07-26 11:37:06.093033] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.626 [2024-07-26 11:37:06.093050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.626 [2024-07-26 11:37:06.096644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.626 [2024-07-26 11:37:06.105948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.626 [2024-07-26 11:37:06.106449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.626 [2024-07-26 11:37:06.106481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.626 [2024-07-26 11:37:06.106500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.626 [2024-07-26 11:37:06.106739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.626 [2024-07-26 11:37:06.106983] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.626 [2024-07-26 11:37:06.107007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.626 [2024-07-26 11:37:06.107024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.626 [2024-07-26 11:37:06.110628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.626 [2024-07-26 11:37:06.119932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.627 [2024-07-26 11:37:06.120380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.627 [2024-07-26 11:37:06.120412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.627 [2024-07-26 11:37:06.120439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.627 [2024-07-26 11:37:06.120681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.627 [2024-07-26 11:37:06.120925] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.627 [2024-07-26 11:37:06.120950] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.627 [2024-07-26 11:37:06.120967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.627 [2024-07-26 11:37:06.124549] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.627 [2024-07-26 11:37:06.133843] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.627 [2024-07-26 11:37:06.134290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.627 [2024-07-26 11:37:06.134321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.627 [2024-07-26 11:37:06.134339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.627 [2024-07-26 11:37:06.134588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.627 [2024-07-26 11:37:06.134832] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.627 [2024-07-26 11:37:06.134857] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.627 [2024-07-26 11:37:06.134873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.627 [2024-07-26 11:37:06.138455] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.627 [2024-07-26 11:37:06.147751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.627 [2024-07-26 11:37:06.148239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.627 [2024-07-26 11:37:06.148271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.627 [2024-07-26 11:37:06.148289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.627 [2024-07-26 11:37:06.148540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.627 [2024-07-26 11:37:06.148783] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.627 [2024-07-26 11:37:06.148808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.627 [2024-07-26 11:37:06.148825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.627 [2024-07-26 11:37:06.152399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.627 [2024-07-26 11:37:06.161714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.627 [2024-07-26 11:37:06.162337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.627 [2024-07-26 11:37:06.162382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.627 [2024-07-26 11:37:06.162403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.627 [2024-07-26 11:37:06.162663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.627 [2024-07-26 11:37:06.162908] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.627 [2024-07-26 11:37:06.162934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.627 [2024-07-26 11:37:06.162952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.627 [2024-07-26 11:37:06.166541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.627 [2024-07-26 11:37:06.175665] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.627 [2024-07-26 11:37:06.176171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.627 [2024-07-26 11:37:06.176204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.627 [2024-07-26 11:37:06.176224] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.627 [2024-07-26 11:37:06.176484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.627 [2024-07-26 11:37:06.176728] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.627 [2024-07-26 11:37:06.176753] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.627 [2024-07-26 11:37:06.176770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.627 [2024-07-26 11:37:06.180354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.627 [2024-07-26 11:37:06.189670] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.627 [2024-07-26 11:37:06.190240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.627 [2024-07-26 11:37:06.190293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.627 [2024-07-26 11:37:06.190312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.627 [2024-07-26 11:37:06.190567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.627 [2024-07-26 11:37:06.190813] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.627 [2024-07-26 11:37:06.190839] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.627 [2024-07-26 11:37:06.190856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.627 [2024-07-26 11:37:06.194453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.627 [2024-07-26 11:37:06.203556] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.627 [2024-07-26 11:37:06.204042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.627 [2024-07-26 11:37:06.204076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.627 [2024-07-26 11:37:06.204094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.627 [2024-07-26 11:37:06.204334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.627 [2024-07-26 11:37:06.204592] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.627 [2024-07-26 11:37:06.204618] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.627 [2024-07-26 11:37:06.204635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.627 [2024-07-26 11:37:06.208218] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.627 [2024-07-26 11:37:06.217548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.627 [2024-07-26 11:37:06.217983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.627 [2024-07-26 11:37:06.218035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.627 [2024-07-26 11:37:06.218054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.627 [2024-07-26 11:37:06.218293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.627 [2024-07-26 11:37:06.218549] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.627 [2024-07-26 11:37:06.218575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.627 [2024-07-26 11:37:06.218600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.627 [2024-07-26 11:37:06.222184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.627 [2024-07-26 11:37:06.231512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.627 [2024-07-26 11:37:06.231995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.627 [2024-07-26 11:37:06.232045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.627 [2024-07-26 11:37:06.232066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.627 [2024-07-26 11:37:06.232305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.627 [2024-07-26 11:37:06.232564] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.627 [2024-07-26 11:37:06.232590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.627 [2024-07-26 11:37:06.232606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.627 [2024-07-26 11:37:06.236196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.627 [2024-07-26 11:37:06.245525] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.627 [2024-07-26 11:37:06.246093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.627 [2024-07-26 11:37:06.246144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.627 [2024-07-26 11:37:06.246162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.627 [2024-07-26 11:37:06.246401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.627 [2024-07-26 11:37:06.246656] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.627 [2024-07-26 11:37:06.246693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.627 [2024-07-26 11:37:06.246710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.627 [2024-07-26 11:37:06.250306] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.628 [2024-07-26 11:37:06.259407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.628 [2024-07-26 11:37:06.259939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.628 [2024-07-26 11:37:06.259992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.628 [2024-07-26 11:37:06.260011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.628 [2024-07-26 11:37:06.260251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.628 [2024-07-26 11:37:06.260507] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.628 [2024-07-26 11:37:06.260533] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.628 [2024-07-26 11:37:06.260549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.628 [2024-07-26 11:37:06.264133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.628 [2024-07-26 11:37:06.273266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.628 [2024-07-26 11:37:06.273887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.628 [2024-07-26 11:37:06.273934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.628 [2024-07-26 11:37:06.273955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.628 [2024-07-26 11:37:06.274201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.628 [2024-07-26 11:37:06.274461] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.628 [2024-07-26 11:37:06.274487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.628 [2024-07-26 11:37:06.274504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.628 [2024-07-26 11:37:06.278088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.887 [2024-07-26 11:37:06.287195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.887 [2024-07-26 11:37:06.287722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.887 [2024-07-26 11:37:06.287756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.887 [2024-07-26 11:37:06.287774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.887 [2024-07-26 11:37:06.288014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.887 [2024-07-26 11:37:06.288257] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.887 [2024-07-26 11:37:06.288283] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.887 [2024-07-26 11:37:06.288300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.887 [2024-07-26 11:37:06.291897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.887 [2024-07-26 11:37:06.301204] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.887 [2024-07-26 11:37:06.301753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.887 [2024-07-26 11:37:06.301805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.887 [2024-07-26 11:37:06.301824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.887 [2024-07-26 11:37:06.302063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.887 [2024-07-26 11:37:06.302306] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.887 [2024-07-26 11:37:06.302331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.887 [2024-07-26 11:37:06.302348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.887 [2024-07-26 11:37:06.305943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.887 [2024-07-26 11:37:06.315260] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.887 [2024-07-26 11:37:06.315853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.887 [2024-07-26 11:37:06.315906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.887 [2024-07-26 11:37:06.315925] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.887 [2024-07-26 11:37:06.316171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.887 [2024-07-26 11:37:06.316414] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.887 [2024-07-26 11:37:06.316455] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.887 [2024-07-26 11:37:06.316473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.887 [2024-07-26 11:37:06.320055] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.887 [2024-07-26 11:37:06.329150] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.887 [2024-07-26 11:37:06.329663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.887 [2024-07-26 11:37:06.329696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.887 [2024-07-26 11:37:06.329714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.887 [2024-07-26 11:37:06.329954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.887 [2024-07-26 11:37:06.330199] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.888 [2024-07-26 11:37:06.330223] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.888 [2024-07-26 11:37:06.330240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.888 [2024-07-26 11:37:06.333828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.888 [2024-07-26 11:37:06.343123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.888 [2024-07-26 11:37:06.343603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.888 [2024-07-26 11:37:06.343637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.888 [2024-07-26 11:37:06.343656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.888 [2024-07-26 11:37:06.343896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.888 [2024-07-26 11:37:06.344141] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.888 [2024-07-26 11:37:06.344166] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.888 [2024-07-26 11:37:06.344182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.888 [2024-07-26 11:37:06.347771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.888 [2024-07-26 11:37:06.357112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.888 [2024-07-26 11:37:06.357584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.888 [2024-07-26 11:37:06.357617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.888 [2024-07-26 11:37:06.357635] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.888 [2024-07-26 11:37:06.357875] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.888 [2024-07-26 11:37:06.358120] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.888 [2024-07-26 11:37:06.358145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.888 [2024-07-26 11:37:06.358167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.888 [2024-07-26 11:37:06.361759] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.888 [2024-07-26 11:37:06.371044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.888 [2024-07-26 11:37:06.371574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.888 [2024-07-26 11:37:06.371629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.888 [2024-07-26 11:37:06.371648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.888 [2024-07-26 11:37:06.371887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.888 [2024-07-26 11:37:06.372132] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.888 [2024-07-26 11:37:06.372157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.888 [2024-07-26 11:37:06.372174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.888 [2024-07-26 11:37:06.375768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.888 [2024-07-26 11:37:06.385067] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.888 [2024-07-26 11:37:06.385592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.888 [2024-07-26 11:37:06.385644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.888 [2024-07-26 11:37:06.385662] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.888 [2024-07-26 11:37:06.385902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.888 [2024-07-26 11:37:06.386145] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.888 [2024-07-26 11:37:06.386170] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.888 [2024-07-26 11:37:06.386186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.888 [2024-07-26 11:37:06.389776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.888 [2024-07-26 11:37:06.399074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.888 [2024-07-26 11:37:06.399520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.888 [2024-07-26 11:37:06.399552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.888 [2024-07-26 11:37:06.399571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.888 [2024-07-26 11:37:06.399810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.888 [2024-07-26 11:37:06.400053] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.888 [2024-07-26 11:37:06.400078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.888 [2024-07-26 11:37:06.400094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.888 [2024-07-26 11:37:06.403680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.888 [2024-07-26 11:37:06.412987] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.888 [2024-07-26 11:37:06.413442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.888 [2024-07-26 11:37:06.413479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.888 [2024-07-26 11:37:06.413498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.888 [2024-07-26 11:37:06.413738] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.888 [2024-07-26 11:37:06.413982] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.888 [2024-07-26 11:37:06.414006] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.888 [2024-07-26 11:37:06.414022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.888 [2024-07-26 11:37:06.417612] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.888 [2024-07-26 11:37:06.426937] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.888 [2024-07-26 11:37:06.427358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.888 [2024-07-26 11:37:06.427390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.888 [2024-07-26 11:37:06.427408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.888 [2024-07-26 11:37:06.427659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.888 [2024-07-26 11:37:06.427903] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.888 [2024-07-26 11:37:06.427928] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.888 [2024-07-26 11:37:06.427944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.888 [2024-07-26 11:37:06.431539] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.888 [2024-07-26 11:37:06.440843] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.888 [2024-07-26 11:37:06.441265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.888 [2024-07-26 11:37:06.441297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.888 [2024-07-26 11:37:06.441315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.888 [2024-07-26 11:37:06.441567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.888 [2024-07-26 11:37:06.441812] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.888 [2024-07-26 11:37:06.441837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.888 [2024-07-26 11:37:06.441852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.888 [2024-07-26 11:37:06.445447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:10.888 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2231383 Killed "${NVMF_APP[@]}" "$@"
00:29:10.888 11:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:29:10.888 11:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:29:10.888 11:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:29:10.888 11:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:10.888 11:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:10.888 [2024-07-26 11:37:06.454752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:10.888 [2024-07-26 11:37:06.455184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.888 [2024-07-26 11:37:06.455217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420
00:29:10.888 [2024-07-26 11:37:06.455236] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set
00:29:10.888 [2024-07-26 11:37:06.455489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor
00:29:10.888 [2024-07-26 11:37:06.455733] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:10.888 [2024-07-26 11:37:06.455759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:10.888 [2024-07-26 11:37:06.455775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:10.888 11:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2232451 00:29:10.888 11:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:10.888 11:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2232451 00:29:10.889 11:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 2232451 ']' 00:29:10.889 11:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:10.889 11:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:10.889 11:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:10.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:10.889 11:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:10.889 11:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:10.889 [2024-07-26 11:37:06.459353] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.889 [2024-07-26 11:37:06.468657] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.889 [2024-07-26 11:37:06.469083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.889 [2024-07-26 11:37:06.469134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:10.889 [2024-07-26 11:37:06.469153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:10.889 [2024-07-26 11:37:06.469393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:10.889 [2024-07-26 11:37:06.469647] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.889 [2024-07-26 11:37:06.469673] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.889 [2024-07-26 11:37:06.469688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.889 [2024-07-26 11:37:06.473277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
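Here tgt_init restarts the target: nvmf_tgt is launched inside the cvl_0_0_ns_spdk network namespace with shm id 0 (-i 0), the full tracepoint mask (-e 0xFFFF) and core mask 0xE, and waitforlisten blocks until the new process (nvmfpid=2232451) answers on /var/tmp/spdk.sock. A condensed sketch of that sequence, assuming the SPDK tree layout used by this job (the real helpers live in the shared test scripts):

    # restart the target inside the test netns and record its pid
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # waitforlisten, in essence: poll the RPC socket until it responds
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done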
00:29:10.889 [2024-07-26 11:37:06.482581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.889 [2024-07-26 11:37:06.482992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.889 [2024-07-26 11:37:06.483023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:10.889 [2024-07-26 11:37:06.483041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:10.889 [2024-07-26 11:37:06.483280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:10.889 [2024-07-26 11:37:06.483539] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.889 [2024-07-26 11:37:06.483564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.889 [2024-07-26 11:37:06.483579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.889 [2024-07-26 11:37:06.487159] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.889 [2024-07-26 11:37:06.496473] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.889 [2024-07-26 11:37:06.496924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.889 [2024-07-26 11:37:06.496955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:10.889 [2024-07-26 11:37:06.496973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:10.889 [2024-07-26 11:37:06.497211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:10.889 [2024-07-26 11:37:06.497465] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.889 [2024-07-26 11:37:06.497489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.889 [2024-07-26 11:37:06.497505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.889 [2024-07-26 11:37:06.501080] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.889 [2024-07-26 11:37:06.509013] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
00:29:10.889 [2024-07-26 11:37:06.509104] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:10.889 [2024-07-26 11:37:06.510380] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.889 [2024-07-26 11:37:06.510812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.889 [2024-07-26 11:37:06.510844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:10.889 [2024-07-26 11:37:06.510862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:10.889 [2024-07-26 11:37:06.511101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:10.889 [2024-07-26 11:37:06.511344] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.889 [2024-07-26 11:37:06.511368] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.889 [2024-07-26 11:37:06.511384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.889 [2024-07-26 11:37:06.514969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:10.889 [2024-07-26 11:37:06.524254] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.889 [2024-07-26 11:37:06.524685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.889 [2024-07-26 11:37:06.524716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:10.889 [2024-07-26 11:37:06.524734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:10.889 [2024-07-26 11:37:06.524972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:10.889 [2024-07-26 11:37:06.525221] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.889 [2024-07-26 11:37:06.525245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.889 [2024-07-26 11:37:06.525261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.889 [2024-07-26 11:37:06.528850] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:10.889 [2024-07-26 11:37:06.538327] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:10.889 [2024-07-26 11:37:06.538763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.889 [2024-07-26 11:37:06.538795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:10.889 [2024-07-26 11:37:06.538813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:10.889 [2024-07-26 11:37:06.539053] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:10.889 [2024-07-26 11:37:06.539295] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:10.889 [2024-07-26 11:37:06.539319] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:10.889 [2024-07-26 11:37:06.539334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:10.889 [2024-07-26 11:37:06.542926] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.149 EAL: No free 2048 kB hugepages reported on node 1 00:29:11.149 [2024-07-26 11:37:06.552222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.149 [2024-07-26 11:37:06.552675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.149 [2024-07-26 11:37:06.552706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:11.149 [2024-07-26 11:37:06.552724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:11.149 [2024-07-26 11:37:06.552963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:11.149 [2024-07-26 11:37:06.553206] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.149 [2024-07-26 11:37:06.553229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.149 [2024-07-26 11:37:06.553248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.149 [2024-07-26 11:37:06.556832] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.149 [2024-07-26 11:37:06.566131] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.149 [2024-07-26 11:37:06.566546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.149 [2024-07-26 11:37:06.566579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:11.149 [2024-07-26 11:37:06.566597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:11.149 [2024-07-26 11:37:06.566836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:11.149 [2024-07-26 11:37:06.567079] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.149 [2024-07-26 11:37:06.567102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.149 [2024-07-26 11:37:06.567117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.149 [2024-07-26 11:37:06.570709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.149 [2024-07-26 11:37:06.580012] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.149 [2024-07-26 11:37:06.580448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.149 [2024-07-26 11:37:06.580479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:11.149 [2024-07-26 11:37:06.580497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:11.149 [2024-07-26 11:37:06.580735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:11.149 [2024-07-26 11:37:06.580978] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.149 [2024-07-26 11:37:06.581002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.150 [2024-07-26 11:37:06.581017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.150 [2024-07-26 11:37:06.584600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.150 [2024-07-26 11:37:06.587204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:11.150 [2024-07-26 11:37:06.593919] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.150 [2024-07-26 11:37:06.594418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.150 [2024-07-26 11:37:06.594460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:11.150 [2024-07-26 11:37:06.594480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:11.150 [2024-07-26 11:37:06.594723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:11.150 [2024-07-26 11:37:06.594967] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.150 [2024-07-26 11:37:06.594991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.150 [2024-07-26 11:37:06.595008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.150 [2024-07-26 11:37:06.598594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.150 [2024-07-26 11:37:06.607899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.150 [2024-07-26 11:37:06.608410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.150 [2024-07-26 11:37:06.608456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:11.150 [2024-07-26 11:37:06.608478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:11.150 [2024-07-26 11:37:06.608736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:11.150 [2024-07-26 11:37:06.608981] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.150 [2024-07-26 11:37:06.609005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.150 [2024-07-26 11:37:06.609022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.150 [2024-07-26 11:37:06.612632] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.150 [2024-07-26 11:37:06.621925] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.150 [2024-07-26 11:37:06.622369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.150 [2024-07-26 11:37:06.622413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:11.150 [2024-07-26 11:37:06.622442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:11.150 [2024-07-26 11:37:06.622684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:11.150 [2024-07-26 11:37:06.622927] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.150 [2024-07-26 11:37:06.622951] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.150 [2024-07-26 11:37:06.622968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.150 [2024-07-26 11:37:06.626551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.150 [2024-07-26 11:37:06.635848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.150 [2024-07-26 11:37:06.636304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.150 [2024-07-26 11:37:06.636336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:11.150 [2024-07-26 11:37:06.636354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:11.150 [2024-07-26 11:37:06.636605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:11.150 [2024-07-26 11:37:06.636849] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.150 [2024-07-26 11:37:06.636873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.150 [2024-07-26 11:37:06.636889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.150 [2024-07-26 11:37:06.640473] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.150 [2024-07-26 11:37:06.649773] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.150 [2024-07-26 11:37:06.650237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.150 [2024-07-26 11:37:06.650268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:11.150 [2024-07-26 11:37:06.650286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:11.150 [2024-07-26 11:37:06.650535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:11.150 [2024-07-26 11:37:06.650779] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.150 [2024-07-26 11:37:06.650803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.150 [2024-07-26 11:37:06.650819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.150 [2024-07-26 11:37:06.654400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.150 [2024-07-26 11:37:06.663720] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.150 [2024-07-26 11:37:06.664266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.150 [2024-07-26 11:37:06.664309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:11.150 [2024-07-26 11:37:06.664331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:11.150 [2024-07-26 11:37:06.664589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:11.150 [2024-07-26 11:37:06.664849] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.150 [2024-07-26 11:37:06.664874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.150 [2024-07-26 11:37:06.664892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.150 [2024-07-26 11:37:06.668477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.150 [2024-07-26 11:37:06.677783] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.150 [2024-07-26 11:37:06.678247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.150 [2024-07-26 11:37:06.678279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:11.150 [2024-07-26 11:37:06.678298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:11.150 [2024-07-26 11:37:06.678550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:11.150 [2024-07-26 11:37:06.678793] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.150 [2024-07-26 11:37:06.678817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.150 [2024-07-26 11:37:06.678833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.150 [2024-07-26 11:37:06.682408] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.150 [2024-07-26 11:37:06.691703] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.150 [2024-07-26 11:37:06.692157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.150 [2024-07-26 11:37:06.692189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:11.150 [2024-07-26 11:37:06.692207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:11.150 [2024-07-26 11:37:06.692459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:11.150 [2024-07-26 11:37:06.692703] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.150 [2024-07-26 11:37:06.692728] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.150 [2024-07-26 11:37:06.692743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.150 [2024-07-26 11:37:06.696317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.150 [2024-07-26 11:37:06.705609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.150 [2024-07-26 11:37:06.706035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.150 [2024-07-26 11:37:06.706066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:11.150 [2024-07-26 11:37:06.706083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:11.150 [2024-07-26 11:37:06.706323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:11.150 [2024-07-26 11:37:06.706576] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.150 [2024-07-26 11:37:06.706601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.150 [2024-07-26 11:37:06.706617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.150 [2024-07-26 11:37:06.709242] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:11.150 [2024-07-26 11:37:06.709279] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:11.150 [2024-07-26 11:37:06.709296] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:11.150 [2024-07-26 11:37:06.709310] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:11.150 [2024-07-26 11:37:06.709322] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:11.150 [2024-07-26 11:37:06.709559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:11.151 [2024-07-26 11:37:06.709590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:11.151 [2024-07-26 11:37:06.709594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:11.151 [2024-07-26 11:37:06.710201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.151 [2024-07-26 11:37:06.719530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.151 [2024-07-26 11:37:06.720073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.151 [2024-07-26 11:37:06.720115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:11.151 [2024-07-26 11:37:06.720136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:11.151 [2024-07-26 11:37:06.720385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:11.151 [2024-07-26 11:37:06.720641] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.151 [2024-07-26 11:37:06.720667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.151 [2024-07-26 11:37:06.720685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
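The app_setup_trace notices above spell out how to inspect the 0xFFFF tracepoint group mask the target was started with; while this nvmf_tgt instance is alive, the advertised commands can be used as-is:

    spdk_trace -s nvmf -i 0          # snapshot the live trace buffer
    cp /dev/shm/nvmf_trace.0 /tmp/   # or keep the raw shm file for offline analysis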
00:29:11.151 [2024-07-26 11:37:06.724266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.151 [2024-07-26 11:37:06.733589] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.151 [2024-07-26 11:37:06.734102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.151 [2024-07-26 11:37:06.734145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:11.151 [2024-07-26 11:37:06.734166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:11.151 [2024-07-26 11:37:06.734415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:11.151 [2024-07-26 11:37:06.734670] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.151 [2024-07-26 11:37:06.734697] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.151 [2024-07-26 11:37:06.734715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.151 [2024-07-26 11:37:06.738297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.151 [2024-07-26 11:37:06.747629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.151 [2024-07-26 11:37:06.748213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.151 [2024-07-26 11:37:06.748275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:11.151 [2024-07-26 11:37:06.748297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:11.151 [2024-07-26 11:37:06.748559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:11.151 [2024-07-26 11:37:06.748822] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.151 [2024-07-26 11:37:06.748847] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.151 [2024-07-26 11:37:06.748865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.151 [2024-07-26 11:37:06.752450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.151 [2024-07-26 11:37:06.761561] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.151 [2024-07-26 11:37:06.762126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.151 [2024-07-26 11:37:06.762172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:11.151 [2024-07-26 11:37:06.762193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:11.151 [2024-07-26 11:37:06.762453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:11.151 [2024-07-26 11:37:06.762700] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.151 [2024-07-26 11:37:06.762725] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.151 [2024-07-26 11:37:06.762744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.151 [2024-07-26 11:37:06.766317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.151 [2024-07-26 11:37:06.775634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.151 [2024-07-26 11:37:06.776141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.151 [2024-07-26 11:37:06.776198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:11.151 [2024-07-26 11:37:06.776219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:11.151 [2024-07-26 11:37:06.776477] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:11.151 [2024-07-26 11:37:06.776723] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.151 [2024-07-26 11:37:06.776747] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.151 [2024-07-26 11:37:06.776765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.151 [2024-07-26 11:37:06.780340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.151 [2024-07-26 11:37:06.789660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.151 [2024-07-26 11:37:06.790216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.151 [2024-07-26 11:37:06.790262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:11.151 [2024-07-26 11:37:06.790283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:11.151 [2024-07-26 11:37:06.790542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:11.151 [2024-07-26 11:37:06.790790] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.151 [2024-07-26 11:37:06.790814] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.151 [2024-07-26 11:37:06.790832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.151 [2024-07-26 11:37:06.794421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.151 [2024-07-26 11:37:06.803714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.151 [2024-07-26 11:37:06.804161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.151 [2024-07-26 11:37:06.804192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:11.151 [2024-07-26 11:37:06.804209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:11.151 [2024-07-26 11:37:06.804460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:11.151 [2024-07-26 11:37:06.804703] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.151 [2024-07-26 11:37:06.804726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.151 [2024-07-26 11:37:06.804742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.151 [2024-07-26 11:37:06.808314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.410 [2024-07-26 11:37:06.817631] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.410 [2024-07-26 11:37:06.818055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.410 [2024-07-26 11:37:06.818086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:11.410 [2024-07-26 11:37:06.818104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:11.410 [2024-07-26 11:37:06.818344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:11.410 [2024-07-26 11:37:06.818599] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.410 [2024-07-26 11:37:06.818624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.410 [2024-07-26 11:37:06.818639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.410 [2024-07-26 11:37:06.822214] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.410 11:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:11.410 11:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:29:11.410 11:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:11.410 11:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:11.410 11:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:11.410 [2024-07-26 11:37:06.831507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.410 [2024-07-26 11:37:06.831953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.410 [2024-07-26 11:37:06.831984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:11.410 [2024-07-26 11:37:06.832002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:11.410 [2024-07-26 11:37:06.832240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:11.410 [2024-07-26 11:37:06.832494] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.410 [2024-07-26 11:37:06.832518] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.410 [2024-07-26 11:37:06.832534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.410 [2024-07-26 11:37:06.836117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.410 [2024-07-26 11:37:06.845418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.410 [2024-07-26 11:37:06.845852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.410 [2024-07-26 11:37:06.845883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:11.410 [2024-07-26 11:37:06.845901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:11.410 [2024-07-26 11:37:06.846138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:11.410 [2024-07-26 11:37:06.846381] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.410 [2024-07-26 11:37:06.846406] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.410 [2024-07-26 11:37:06.846421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.410 [2024-07-26 11:37:06.850010] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.411 11:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:11.411 11:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:11.411 11:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.411 11:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:11.411 [2024-07-26 11:37:06.856816] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:11.411 [2024-07-26 11:37:06.859301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.411 [2024-07-26 11:37:06.859777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-07-26 11:37:06.859826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:11.411 [2024-07-26 11:37:06.859845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:11.411 [2024-07-26 11:37:06.860083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:11.411 [2024-07-26 11:37:06.860325] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.411 [2024-07-26 11:37:06.860349] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.411 [2024-07-26 11:37:06.860364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.411 [2024-07-26 11:37:06.863946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.411 [2024-07-26 11:37:06.873238] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.411 [2024-07-26 11:37:06.873707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-07-26 11:37:06.873766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:11.411 [2024-07-26 11:37:06.873784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:11.411 [2024-07-26 11:37:06.874023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:11.411 [2024-07-26 11:37:06.874266] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.411 [2024-07-26 11:37:06.874289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.411 [2024-07-26 11:37:06.874313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.411 [2024-07-26 11:37:06.877897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.411 11:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.411 11:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:11.411 11:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.411 11:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:11.411 [2024-07-26 11:37:06.887189] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.411 [2024-07-26 11:37:06.887641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-07-26 11:37:06.887691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:11.411 [2024-07-26 11:37:06.887710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:11.411 [2024-07-26 11:37:06.887951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:11.411 [2024-07-26 11:37:06.888195] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.411 [2024-07-26 11:37:06.888218] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.411 [2024-07-26 11:37:06.888234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.411 [2024-07-26 11:37:06.891824] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:11.411 [2024-07-26 11:37:06.901139] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.411 [2024-07-26 11:37:06.901669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-07-26 11:37:06.901711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:11.411 [2024-07-26 11:37:06.901732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:11.411 [2024-07-26 11:37:06.901981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:11.411 [2024-07-26 11:37:06.902228] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.411 [2024-07-26 11:37:06.902252] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.411 [2024-07-26 11:37:06.902270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:11.411 Malloc0 00:29:11.411 11:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.411 11:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:11.411 11:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.411 11:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:11.411 [2024-07-26 11:37:06.905855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:11.411 11:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.411 11:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:11.411 11:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.411 11:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:11.411 [2024-07-26 11:37:06.915161] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:11.411 [2024-07-26 11:37:06.915639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-07-26 11:37:06.915681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207b540 with addr=10.0.0.2, port=4420 00:29:11.411 [2024-07-26 11:37:06.915700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207b540 is same with the state(5) to be set 00:29:11.411 [2024-07-26 11:37:06.915941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207b540 (9): Bad file descriptor 00:29:11.411 [2024-07-26 11:37:06.916184] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:11.411 [2024-07-26 11:37:06.916208] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:11.411 [2024-07-26 11:37:06.916223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
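Interleaved with the reconnect errors, tgt_init has by now rebuilt the target configuration over RPC: the TCP transport, the Malloc0 bdev (64 MiB, 512-byte blocks), the cnode1 subsystem and its namespace, with the listener added just below. Collected into plain rpc.py invocations (rpc_cmd in these tests forwards to scripts/rpc.py on /var/tmp/spdk.sock), the traced sequence is:

    rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420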
00:29:11.411 [2024-07-26 11:37:06.919804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:11.411 11:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:11.411 11:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:11.411 11:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:11.411 11:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:11.411 [2024-07-26 11:37:06.924502] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:11.411 11:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:11.411 11:37:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2231669
00:29:11.411 [2024-07-26 11:37:06.929086] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:11.411 [2024-07-26 11:37:06.963112] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:29:21.378
00:29:21.378 Latency(us)
00:29:21.378 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:21.378 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:21.378 Verification LBA range: start 0x0 length 0x4000
00:29:21.378 Nvme1n1 : 15.01 6099.28 23.83 8483.25 0.00 8751.45 922.36 23107.51
00:29:21.378 ===================================================================================================================
00:29:21.378 Total : 6099.28 23.83 8483.25 0.00 8751.45 922.36 23107.51
00:29:21.378 11:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:29:21.378 11:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:21.378 11:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:21.378 11:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:21.378 11:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:21.378 11:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:29:21.378 11:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:29:21.378 11:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:29:21.378 11:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:29:21.378 11:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:29:21.378 11:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:29:21.378 11:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:29:21.378 11:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:29:21.378 rmmod nvme_tcp
00:29:21.378 rmmod nvme_fabrics
00:29:21.378 rmmod nvme_keyring
00:29:21.378 11:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:29:21.378 11:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:29:21.378 11:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
00:29:21.378 11:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf --
nvmf/common.sh@489 -- # '[' -n 2232451 ']' 00:29:21.379 11:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2232451 00:29:21.379 11:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 2232451 ']' 00:29:21.379 11:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 2232451 00:29:21.379 11:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:29:21.379 11:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:21.379 11:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2232451 00:29:21.379 11:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:21.379 11:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:21.379 11:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2232451' 00:29:21.379 killing process with pid 2232451 00:29:21.379 11:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 2232451 00:29:21.379 11:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 2232451 00:29:21.379 11:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:21.379 11:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:21.379 11:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:21.379 11:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:21.379 11:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:21.379 11:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:21.379 11:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:21.379 11:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.926 11:37:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:23.926 00:29:23.926 real 0m23.422s 00:29:23.926 user 1m1.922s 00:29:23.926 sys 0m4.887s 00:29:23.926 11:37:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:23.926 11:37:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:23.926 ************************************ 00:29:23.926 END TEST nvmf_bdevperf 00:29:23.926 ************************************ 00:29:23.926 11:37:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:23.926 11:37:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:23.926 11:37:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:23.926 11:37:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.926 ************************************ 00:29:23.926 START TEST nvmf_target_disconnect 00:29:23.926 ************************************ 00:29:23.926 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:23.926 * Looking for test storage... 00:29:23.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:23.926 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:23.926 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:29:23.926 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:23.926 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:23.926 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:23.926 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:23.926 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:23.926 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:23.926 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:23.926 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:23.926 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:23.926 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:23.926 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:23.926 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:29:23.926 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:23.926 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:23.926 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:23.926 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:23.926 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:23.926 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:23.926 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:23.926 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:23.926 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.926 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.926 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.926 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:23.926 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.926 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:29:23.926 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:23.926 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:23.926 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:23.926 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:23.926 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:23.926 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:23.926 11:37:19 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:23.927 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:23.927 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:23.927 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:23.927 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:23.927 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:23.927 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:23.927 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:23.927 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:23.927 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:23.927 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:23.927 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.927 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:23.927 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.927 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:23.927 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:23.927 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:29:23.927 11:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:25.828 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:25.828 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:29:25.828 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:25.828 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:25.828 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:25.828 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:29:25.829 11:37:21 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:29:25.829 Found 0000:84:00.0 (0x8086 - 0x159b) 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:29:25.829 Found 0000:84:00.1 (0x8086 - 0x159b) 00:29:25.829 11:37:21 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:29:25.829 Found net devices under 0000:84:00.0: cvl_0_0 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:29:25.829 Found net devices under 0000:84:00.1: cvl_0_1 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:25.829 11:37:21 
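The discovery pass above matched both ports of an Intel E810 NIC (0x8086:0x159b, driver ice) and resolved each PCI function to its renamed net device (cvl_0_0 and cvl_0_1) by globbing sysfs, which is why is_hw lands on yes. A minimal sketch of that mapping, condensed from the common.sh logic traced above (variable names kept as in the trace; the full script additionally checks that each link is up):

    net_devs=()
    for pci in "${pci_devs[@]}"; do
        # each PCI function exposes its net device under /sys/bus/pci/devices/<bdf>/net/
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names
        net_devs+=("${pci_net_devs[@]}")
    done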
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:25.829 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:26.089 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:26.089 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:26.089 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:26.089 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:26.089 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:26.089 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:26.089 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:26.089 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:26.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:29:26.089 00:29:26.089 --- 10.0.0.2 ping statistics --- 00:29:26.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.089 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:29:26.089 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:26.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:26.089 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:29:26.089 00:29:26.089 --- 10.0.0.1 ping statistics --- 00:29:26.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.089 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:29:26.089 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:26.089 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:29:26.089 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:26.089 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:26.089 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:26.089 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:26.089 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:26.089 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:26.089 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:26.089 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:26.089 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:26.089 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:26.089 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:26.089 ************************************ 00:29:26.089 START TEST nvmf_target_disconnect_tc1 00:29:26.089 ************************************ 00:29:26.089 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:29:26.089 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:26.089 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:29:26.089 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:26.089 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:26.089 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:26.089 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:26.089 11:37:21 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:26.089 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:26.090 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:26.090 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:26.090 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:26.090 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:26.349 EAL: No free 2048 kB hugepages reported on node 1 00:29:26.349 [2024-07-26 11:37:21.787952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.349 [2024-07-26 11:37:21.788035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148a790 with addr=10.0.0.2, port=4420 00:29:26.349 [2024-07-26 11:37:21.788071] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:26.349 [2024-07-26 11:37:21.788097] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:26.349 [2024-07-26 11:37:21.788112] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:29:26.349 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:26.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:26.349 Initializing NVMe Controllers 00:29:26.349 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:29:26.349 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:26.349 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:26.349 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:26.349 00:29:26.349 real 0m0.107s 00:29:26.349 user 0m0.055s 00:29:26.349 sys 0m0.050s 00:29:26.349 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:26.349 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:26.349 ************************************ 00:29:26.349 END TEST nvmf_target_disconnect_tc1 00:29:26.349 ************************************ 00:29:26.349 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:26.349 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:26.349 11:37:21 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:26.349 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:26.349 ************************************ 00:29:26.349 START TEST nvmf_target_disconnect_tc2 00:29:26.349 ************************************ 00:29:26.349 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:29:26.349 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:26.349 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:26.349 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:26.349 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:26.349 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.349 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2235626 00:29:26.349 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:26.349 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2235626 00:29:26.349 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2235626 ']' 00:29:26.349 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:26.349 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:26.349 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:26.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:26.349 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:26.349 11:37:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.349 [2024-07-26 11:37:21.919272] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:29:26.349 [2024-07-26 11:37:21.919360] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:26.349 EAL: No free 2048 kB hugepages reported on node 1 00:29:26.349 [2024-07-26 11:37:22.000398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:26.607 [2024-07-26 11:37:22.143720] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
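Note that tc1 above was expected to fail: reconnect ran before any target existed, so spdk_nvme_probe() hitting connect() errno 111 (ECONNREFUSED) is exactly the outcome the NOT wrapper asserts. For tc2 the test now brings up a real target inside the network namespace that nvmf_tcp_init created earlier, with the initiator side left in the root namespace; waitforlisten then polls /var/tmp/spdk.sock until the RPC server answers. Condensed from the commands traced above (the nvmf_tgt path is shortened here):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &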
00:29:26.607 [2024-07-26 11:37:22.143789] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:26.608 [2024-07-26 11:37:22.143809] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:26.608 [2024-07-26 11:37:22.143826] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:26.608 [2024-07-26 11:37:22.143840] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:26.608 [2024-07-26 11:37:22.144269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:29:26.608 [2024-07-26 11:37:22.144348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:29:26.608 [2024-07-26 11:37:22.144403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:29:26.608 [2024-07-26 11:37:22.144407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:26.866 11:37:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:26.866 11:37:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:29:26.866 11:37:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:26.866 11:37:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:26.866 11:37:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.866 11:37:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:26.866 11:37:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:26.866 11:37:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.866 11:37:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.866 Malloc0 00:29:26.866 11:37:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.866 11:37:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:26.866 11:37:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.866 11:37:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.866 [2024-07-26 11:37:22.347530] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:26.866 11:37:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.866 11:37:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:26.866 11:37:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:29:26.866 11:37:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.866 11:37:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.866 11:37:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:26.866 11:37:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.866 11:37:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.866 11:37:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.866 11:37:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:26.866 11:37:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.866 11:37:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.866 [2024-07-26 11:37:22.375862] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:26.866 11:37:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.866 11:37:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:26.866 11:37:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.866 11:37:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.866 11:37:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.866 11:37:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2235654 00:29:26.866 11:37:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:26.866 11:37:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:26.866 EAL: No free 2048 kB hugepages reported on node 1 00:29:28.832 11:37:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2235626 00:29:28.832 11:37:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:28.832 Read completed with error (sct=0, sc=8) 00:29:28.832 starting I/O failed 00:29:28.832 Read completed with error (sct=0, sc=8) 00:29:28.832 starting I/O failed 00:29:28.832 Read completed with error (sct=0, sc=8) 00:29:28.832 starting I/O failed 00:29:28.832 Read completed with error (sct=0, sc=8) 00:29:28.832 starting 
I/O failed 00:29:28.832 Read completed with error (sct=0, sc=8) 00:29:28.832 starting I/O failed 00:29:28.832 Write completed with error (sct=0, sc=8) 00:29:28.832 starting I/O failed 00:29:28.832 Read completed with error (sct=0, sc=8) 00:29:28.832 starting I/O failed 00:29:28.832 Write completed with error (sct=0, sc=8) 00:29:28.832 starting I/O failed 00:29:28.832 Write completed with error (sct=0, sc=8) 00:29:28.832 starting I/O failed 00:29:28.832 Write completed with error (sct=0, sc=8) 00:29:28.832 starting I/O failed 00:29:28.832 Write completed with error (sct=0, sc=8) 00:29:28.832 starting I/O failed 00:29:28.832 Read completed with error (sct=0, sc=8) 00:29:28.832 starting I/O failed 00:29:28.832 Read completed with error (sct=0, sc=8) 00:29:28.832 starting I/O failed 00:29:28.832 Write completed with error (sct=0, sc=8) 00:29:28.832 starting I/O failed 00:29:28.832 Read completed with error (sct=0, sc=8) 00:29:28.832 starting I/O failed 00:29:28.832 Write completed with error (sct=0, sc=8) 00:29:28.832 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Write completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Write completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Write completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Write completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Write completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Write completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 [2024-07-26 11:37:24.402540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Write completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Write completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 
00:29:28.833 Write completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Write completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Write completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Write completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Write completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Write completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Write completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Write completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Write completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Write completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Write completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Write completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Write completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 [2024-07-26 11:37:24.402973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Write completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Write completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 
Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Write completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Write completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Write completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Write completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Write completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 [2024-07-26 11:37:24.403620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Write completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Write completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Write completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Write completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Write completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Write completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Write completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed 
with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.833 Read completed with error (sct=0, sc=8) 00:29:28.833 starting I/O failed 00:29:28.834 Read completed with error (sct=0, sc=8) 00:29:28.834 starting I/O failed 00:29:28.834 Write completed with error (sct=0, sc=8) 00:29:28.834 starting I/O failed 00:29:28.834 Read completed with error (sct=0, sc=8) 00:29:28.834 starting I/O failed 00:29:28.834 Write completed with error (sct=0, sc=8) 00:29:28.834 starting I/O failed 00:29:28.834 Read completed with error (sct=0, sc=8) 00:29:28.834 starting I/O failed 00:29:28.834 [2024-07-26 11:37:24.404026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:28.834 [2024-07-26 11:37:24.404333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.834 [2024-07-26 11:37:24.404398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.834 qpair failed and we were unable to recover it. 00:29:28.834 [2024-07-26 11:37:24.404606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.834 [2024-07-26 11:37:24.404637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.834 qpair failed and we were unable to recover it. 00:29:28.834 [2024-07-26 11:37:24.404833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.834 [2024-07-26 11:37:24.404861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.834 qpair failed and we were unable to recover it. 00:29:28.834 [2024-07-26 11:37:24.405055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.834 [2024-07-26 11:37:24.405102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.834 qpair failed and we were unable to recover it. 00:29:28.834 [2024-07-26 11:37:24.405339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.834 [2024-07-26 11:37:24.405390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.834 qpair failed and we were unable to recover it. 00:29:28.834 [2024-07-26 11:37:24.405587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.834 [2024-07-26 11:37:24.405616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.834 qpair failed and we were unable to recover it. 00:29:28.834 [2024-07-26 11:37:24.405838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.834 [2024-07-26 11:37:24.405865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.834 qpair failed and we were unable to recover it. 00:29:28.834 [2024-07-26 11:37:24.406108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.834 [2024-07-26 11:37:24.406157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.834 qpair failed and we were unable to recover it. 
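The burst above is the intended failure injection: host/target_disconnect.sh let the reconnect workload run for two seconds and then SIGKILLed the target (pid 2235626), so every queued I/O on qpair ids 1 through 4 completes with an error, each qpair reports CQ transport error -6 (No such device or address), and subsequent reconnect attempts to 10.0.0.2:4420 fail with connect() errno 111 because nothing is listening anymore. The traced sequence, condensed (binary path shortened; the subsystem nqn.2016-06.io.spdk:cnode1 backed by Malloc0 was configured by the rpc_cmd calls traced before the launch):

    reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    reconnectpid=$!      # 2235654 in this run
    sleep 2
    kill -9 2235626      # $nvmfpid, the in-namespace nvmf_tgt
    sleep 2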
00:29:28.834 [2024-07-26 11:37:24.406372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.834 [2024-07-26 11:37:24.406423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.834 qpair failed and we were unable to recover it. 00:29:28.834 [2024-07-26 11:37:24.406643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.834 [2024-07-26 11:37:24.406671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.834 qpair failed and we were unable to recover it. 00:29:28.834 [2024-07-26 11:37:24.406869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.834 [2024-07-26 11:37:24.406917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.834 qpair failed and we were unable to recover it. 00:29:28.834 [2024-07-26 11:37:24.407164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.834 [2024-07-26 11:37:24.407213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.834 qpair failed and we were unable to recover it. 00:29:28.834 [2024-07-26 11:37:24.407393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.834 [2024-07-26 11:37:24.407421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.834 qpair failed and we were unable to recover it. 00:29:28.834 [2024-07-26 11:37:24.407584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.834 [2024-07-26 11:37:24.407612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.834 qpair failed and we were unable to recover it. 00:29:28.834 [2024-07-26 11:37:24.407851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.834 [2024-07-26 11:37:24.407912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.834 qpair failed and we were unable to recover it. 00:29:28.834 [2024-07-26 11:37:24.408137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.834 [2024-07-26 11:37:24.408183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.834 qpair failed and we were unable to recover it. 00:29:28.834 [2024-07-26 11:37:24.408362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.834 [2024-07-26 11:37:24.408392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.834 qpair failed and we were unable to recover it. 00:29:28.834 [2024-07-26 11:37:24.408580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.834 [2024-07-26 11:37:24.408609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.834 qpair failed and we were unable to recover it. 
00:29:28.834 [2024-07-26 11:37:24.408849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.834 [2024-07-26 11:37:24.408898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420
00:29:28.834 qpair failed and we were unable to recover it.
[... the same three-line failure sequence — connect() refused with errno 111 (ECONNREFUSED), the resulting sock connection error on tqpair=0x5c1ea0 to 10.0.0.2 port 4420, and the unrecoverable-qpair notice — repeats for every retry between 11:37:24.408849 and 11:37:24.460041 ...]
00:29:28.840 [2024-07-26 11:37:24.459990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.840 [2024-07-26 11:37:24.460041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420
00:29:28.840 qpair failed and we were unable to recover it.
00:29:28.840 [2024-07-26 11:37:24.460244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.840 [2024-07-26 11:37:24.460272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.840 qpair failed and we were unable to recover it. 00:29:28.840 [2024-07-26 11:37:24.460476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.840 [2024-07-26 11:37:24.460528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.840 qpair failed and we were unable to recover it. 00:29:28.840 [2024-07-26 11:37:24.460733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.840 [2024-07-26 11:37:24.460798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.840 qpair failed and we were unable to recover it. 00:29:28.840 [2024-07-26 11:37:24.461006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.840 [2024-07-26 11:37:24.461052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.840 qpair failed and we were unable to recover it. 00:29:28.840 [2024-07-26 11:37:24.461246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.840 [2024-07-26 11:37:24.461274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.840 qpair failed and we were unable to recover it. 00:29:28.840 [2024-07-26 11:37:24.461464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.840 [2024-07-26 11:37:24.461511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.840 qpair failed and we were unable to recover it. 00:29:28.840 [2024-07-26 11:37:24.461727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.840 [2024-07-26 11:37:24.461774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.841 qpair failed and we were unable to recover it. 00:29:28.841 [2024-07-26 11:37:24.461986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.841 [2024-07-26 11:37:24.462037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.841 qpair failed and we were unable to recover it. 00:29:28.841 [2024-07-26 11:37:24.462253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.841 [2024-07-26 11:37:24.462280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.841 qpair failed and we were unable to recover it. 00:29:28.841 [2024-07-26 11:37:24.462491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.841 [2024-07-26 11:37:24.462537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.841 qpair failed and we were unable to recover it. 
00:29:28.841 [2024-07-26 11:37:24.462705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.841 [2024-07-26 11:37:24.462763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.841 qpair failed and we were unable to recover it. 00:29:28.841 [2024-07-26 11:37:24.462960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.841 [2024-07-26 11:37:24.463010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.841 qpair failed and we were unable to recover it. 00:29:28.841 [2024-07-26 11:37:24.463213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.841 [2024-07-26 11:37:24.463241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.841 qpair failed and we were unable to recover it. 00:29:28.841 [2024-07-26 11:37:24.463457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.841 [2024-07-26 11:37:24.463485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.841 qpair failed and we were unable to recover it. 00:29:28.841 [2024-07-26 11:37:24.463681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.841 [2024-07-26 11:37:24.463734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.841 qpair failed and we were unable to recover it. 00:29:28.841 [2024-07-26 11:37:24.463891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.841 [2024-07-26 11:37:24.463936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.841 qpair failed and we were unable to recover it. 00:29:28.841 [2024-07-26 11:37:24.464138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.841 [2024-07-26 11:37:24.464188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.841 qpair failed and we were unable to recover it. 00:29:28.841 [2024-07-26 11:37:24.464352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.841 [2024-07-26 11:37:24.464380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.841 qpair failed and we were unable to recover it. 00:29:28.841 [2024-07-26 11:37:24.464602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.841 [2024-07-26 11:37:24.464648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.841 qpair failed and we were unable to recover it. 00:29:28.841 [2024-07-26 11:37:24.464889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.841 [2024-07-26 11:37:24.464938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.841 qpair failed and we were unable to recover it. 
00:29:28.841 [2024-07-26 11:37:24.465157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.841 [2024-07-26 11:37:24.465205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.841 qpair failed and we were unable to recover it. 00:29:28.841 [2024-07-26 11:37:24.465380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.841 [2024-07-26 11:37:24.465408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.841 qpair failed and we were unable to recover it. 00:29:28.841 [2024-07-26 11:37:24.465611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.841 [2024-07-26 11:37:24.465658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.841 qpair failed and we were unable to recover it. 00:29:28.841 [2024-07-26 11:37:24.465849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.841 [2024-07-26 11:37:24.465897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.841 qpair failed and we were unable to recover it. 00:29:28.841 [2024-07-26 11:37:24.466087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.841 [2024-07-26 11:37:24.466133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.841 qpair failed and we were unable to recover it. 00:29:28.841 [2024-07-26 11:37:24.466310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.841 [2024-07-26 11:37:24.466337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.841 qpair failed and we were unable to recover it. 00:29:28.841 [2024-07-26 11:37:24.466560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.841 [2024-07-26 11:37:24.466606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.841 qpair failed and we were unable to recover it. 00:29:28.841 [2024-07-26 11:37:24.466810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.841 [2024-07-26 11:37:24.466856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.841 qpair failed and we were unable to recover it. 00:29:28.841 [2024-07-26 11:37:24.467085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.841 [2024-07-26 11:37:24.467135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.841 qpair failed and we were unable to recover it. 00:29:28.841 [2024-07-26 11:37:24.467331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.841 [2024-07-26 11:37:24.467359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.841 qpair failed and we were unable to recover it. 
00:29:28.841 [2024-07-26 11:37:24.467570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.841 [2024-07-26 11:37:24.467617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.841 qpair failed and we were unable to recover it. 00:29:28.841 [2024-07-26 11:37:24.467816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.841 [2024-07-26 11:37:24.467867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.841 qpair failed and we were unable to recover it. 00:29:28.841 [2024-07-26 11:37:24.468085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.841 [2024-07-26 11:37:24.468137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.841 qpair failed and we were unable to recover it. 00:29:28.841 [2024-07-26 11:37:24.468290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.841 [2024-07-26 11:37:24.468319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.841 qpair failed and we were unable to recover it. 00:29:28.841 [2024-07-26 11:37:24.468552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.841 [2024-07-26 11:37:24.468601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.841 qpair failed and we were unable to recover it. 00:29:28.841 [2024-07-26 11:37:24.468835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.841 [2024-07-26 11:37:24.468885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.841 qpair failed and we were unable to recover it. 00:29:28.841 [2024-07-26 11:37:24.469092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.841 [2024-07-26 11:37:24.469137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.841 qpair failed and we were unable to recover it. 00:29:28.841 [2024-07-26 11:37:24.469316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.841 [2024-07-26 11:37:24.469344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.841 qpair failed and we were unable to recover it. 00:29:28.841 [2024-07-26 11:37:24.469493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.841 [2024-07-26 11:37:24.469552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.841 qpair failed and we were unable to recover it. 00:29:28.841 [2024-07-26 11:37:24.469744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.841 [2024-07-26 11:37:24.469790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.841 qpair failed and we were unable to recover it. 
00:29:28.841 [2024-07-26 11:37:24.469984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.841 [2024-07-26 11:37:24.470032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.841 qpair failed and we were unable to recover it. 00:29:28.841 [2024-07-26 11:37:24.470220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.841 [2024-07-26 11:37:24.470248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.841 qpair failed and we were unable to recover it. 00:29:28.841 [2024-07-26 11:37:24.470471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.841 [2024-07-26 11:37:24.470499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.841 qpair failed and we were unable to recover it. 00:29:28.841 [2024-07-26 11:37:24.470713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.841 [2024-07-26 11:37:24.470765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.841 qpair failed and we were unable to recover it. 00:29:28.841 [2024-07-26 11:37:24.470978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.842 [2024-07-26 11:37:24.471030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.842 qpair failed and we were unable to recover it. 00:29:28.842 [2024-07-26 11:37:24.471243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.842 [2024-07-26 11:37:24.471289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.842 qpair failed and we were unable to recover it. 00:29:28.842 [2024-07-26 11:37:24.471492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.842 [2024-07-26 11:37:24.471543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.842 qpair failed and we were unable to recover it. 00:29:28.842 [2024-07-26 11:37:24.471748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.842 [2024-07-26 11:37:24.471804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.842 qpair failed and we were unable to recover it. 00:29:28.842 [2024-07-26 11:37:24.471984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.842 [2024-07-26 11:37:24.472029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.842 qpair failed and we were unable to recover it. 00:29:28.842 [2024-07-26 11:37:24.472243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.842 [2024-07-26 11:37:24.472293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.842 qpair failed and we were unable to recover it. 
00:29:28.842 [2024-07-26 11:37:24.472510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.842 [2024-07-26 11:37:24.472557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.842 qpair failed and we were unable to recover it. 00:29:28.842 [2024-07-26 11:37:24.472744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.842 [2024-07-26 11:37:24.472790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.842 qpair failed and we were unable to recover it. 00:29:28.842 [2024-07-26 11:37:24.472990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.842 [2024-07-26 11:37:24.473039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.842 qpair failed and we were unable to recover it. 00:29:28.842 [2024-07-26 11:37:24.473193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.842 [2024-07-26 11:37:24.473221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.842 qpair failed and we were unable to recover it. 00:29:28.842 [2024-07-26 11:37:24.473425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.842 [2024-07-26 11:37:24.473460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.842 qpair failed and we were unable to recover it. 00:29:28.842 [2024-07-26 11:37:24.473641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.842 [2024-07-26 11:37:24.473694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.842 qpair failed and we were unable to recover it. 00:29:28.842 [2024-07-26 11:37:24.473886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.842 [2024-07-26 11:37:24.473935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.842 qpair failed and we were unable to recover it. 00:29:28.842 [2024-07-26 11:37:24.474092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.842 [2024-07-26 11:37:24.474138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.842 qpair failed and we were unable to recover it. 00:29:28.842 [2024-07-26 11:37:24.474330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.842 [2024-07-26 11:37:24.474358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.842 qpair failed and we were unable to recover it. 00:29:28.842 [2024-07-26 11:37:24.474587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.842 [2024-07-26 11:37:24.474635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.842 qpair failed and we were unable to recover it. 
00:29:28.842 [2024-07-26 11:37:24.474812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.842 [2024-07-26 11:37:24.474858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.842 qpair failed and we were unable to recover it. 00:29:28.842 [2024-07-26 11:37:24.475076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.842 [2024-07-26 11:37:24.475127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.842 qpair failed and we were unable to recover it. 00:29:28.842 [2024-07-26 11:37:24.475330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.842 [2024-07-26 11:37:24.475358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.842 qpair failed and we were unable to recover it. 00:29:28.842 [2024-07-26 11:37:24.475541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.842 [2024-07-26 11:37:24.475587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.842 qpair failed and we were unable to recover it. 00:29:28.842 [2024-07-26 11:37:24.475794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.842 [2024-07-26 11:37:24.475844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.842 qpair failed and we were unable to recover it. 00:29:28.842 [2024-07-26 11:37:24.476036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.842 [2024-07-26 11:37:24.476090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.842 qpair failed and we were unable to recover it. 00:29:28.842 [2024-07-26 11:37:24.476296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.842 [2024-07-26 11:37:24.476324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.842 qpair failed and we were unable to recover it. 00:29:28.842 [2024-07-26 11:37:24.476545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.842 [2024-07-26 11:37:24.476595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.842 qpair failed and we were unable to recover it. 00:29:28.842 [2024-07-26 11:37:24.476813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.842 [2024-07-26 11:37:24.476863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.842 qpair failed and we were unable to recover it. 00:29:28.842 [2024-07-26 11:37:24.477066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.842 [2024-07-26 11:37:24.477113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.842 qpair failed and we were unable to recover it. 
00:29:28.842 [2024-07-26 11:37:24.477299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.842 [2024-07-26 11:37:24.477327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.842 qpair failed and we were unable to recover it. 00:29:28.842 [2024-07-26 11:37:24.477555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.842 [2024-07-26 11:37:24.477602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.842 qpair failed and we were unable to recover it. 00:29:28.842 [2024-07-26 11:37:24.477782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.842 [2024-07-26 11:37:24.477828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.842 qpair failed and we were unable to recover it. 00:29:28.842 [2024-07-26 11:37:24.478052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.842 [2024-07-26 11:37:24.478102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.842 qpair failed and we were unable to recover it. 00:29:28.842 [2024-07-26 11:37:24.478298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.842 [2024-07-26 11:37:24.478325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.842 qpair failed and we were unable to recover it. 00:29:28.842 [2024-07-26 11:37:24.478554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.842 [2024-07-26 11:37:24.478601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.843 qpair failed and we were unable to recover it. 00:29:28.843 [2024-07-26 11:37:24.478828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.843 [2024-07-26 11:37:24.478880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.843 qpair failed and we were unable to recover it. 00:29:28.843 [2024-07-26 11:37:24.479101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.843 [2024-07-26 11:37:24.479152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.843 qpair failed and we were unable to recover it. 00:29:28.843 [2024-07-26 11:37:24.479303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.843 [2024-07-26 11:37:24.479331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.843 qpair failed and we were unable to recover it. 00:29:28.843 [2024-07-26 11:37:24.479511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.843 [2024-07-26 11:37:24.479557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.843 qpair failed and we were unable to recover it. 
00:29:28.843 [2024-07-26 11:37:24.479743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.843 [2024-07-26 11:37:24.479799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.843 qpair failed and we were unable to recover it. 00:29:28.843 [2024-07-26 11:37:24.480007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.843 [2024-07-26 11:37:24.480054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.843 qpair failed and we were unable to recover it. 00:29:28.843 [2024-07-26 11:37:24.480277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.843 [2024-07-26 11:37:24.480309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.843 qpair failed and we were unable to recover it. 00:29:28.843 [2024-07-26 11:37:24.480533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.843 [2024-07-26 11:37:24.480584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.843 qpair failed and we were unable to recover it. 00:29:28.843 [2024-07-26 11:37:24.480801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.843 [2024-07-26 11:37:24.480847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.843 qpair failed and we were unable to recover it. 00:29:28.843 [2024-07-26 11:37:24.481012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.843 [2024-07-26 11:37:24.481061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.843 qpair failed and we were unable to recover it. 00:29:28.843 [2024-07-26 11:37:24.481228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.843 [2024-07-26 11:37:24.481259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.843 qpair failed and we were unable to recover it. 00:29:28.843 [2024-07-26 11:37:24.481471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.843 [2024-07-26 11:37:24.481499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.843 qpair failed and we were unable to recover it. 00:29:28.843 [2024-07-26 11:37:24.481694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.843 [2024-07-26 11:37:24.481730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.843 qpair failed and we were unable to recover it. 00:29:28.843 [2024-07-26 11:37:24.481960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.843 [2024-07-26 11:37:24.482008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.843 qpair failed and we were unable to recover it. 
00:29:28.843 [2024-07-26 11:37:24.482187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.843 [2024-07-26 11:37:24.482233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.843 qpair failed and we were unable to recover it. 00:29:28.843 [2024-07-26 11:37:24.482370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.843 [2024-07-26 11:37:24.482398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.843 qpair failed and we were unable to recover it. 00:29:28.843 [2024-07-26 11:37:24.482606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.843 [2024-07-26 11:37:24.482654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.843 qpair failed and we were unable to recover it. 00:29:28.843 [2024-07-26 11:37:24.482844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.843 [2024-07-26 11:37:24.482891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.843 qpair failed and we were unable to recover it. 00:29:28.843 [2024-07-26 11:37:24.483082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.843 [2024-07-26 11:37:24.483139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.843 qpair failed and we were unable to recover it. 00:29:28.843 [2024-07-26 11:37:24.483322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.843 [2024-07-26 11:37:24.483350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.843 qpair failed and we were unable to recover it. 00:29:28.843 [2024-07-26 11:37:24.483565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.843 [2024-07-26 11:37:24.483611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.843 qpair failed and we were unable to recover it. 00:29:28.843 [2024-07-26 11:37:24.483818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.843 [2024-07-26 11:37:24.483870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.843 qpair failed and we were unable to recover it. 00:29:28.843 [2024-07-26 11:37:24.484067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.843 [2024-07-26 11:37:24.484118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.843 qpair failed and we were unable to recover it. 00:29:28.843 [2024-07-26 11:37:24.484269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.843 [2024-07-26 11:37:24.484297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.843 qpair failed and we were unable to recover it. 
00:29:28.843 [2024-07-26 11:37:24.484482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.843 [2024-07-26 11:37:24.484538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.843 qpair failed and we were unable to recover it. 00:29:28.843 [2024-07-26 11:37:24.484752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.843 [2024-07-26 11:37:24.484798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.843 qpair failed and we were unable to recover it. 00:29:28.843 [2024-07-26 11:37:24.485008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.843 [2024-07-26 11:37:24.485053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.843 qpair failed and we were unable to recover it. 00:29:28.843 [2024-07-26 11:37:24.485227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.843 [2024-07-26 11:37:24.485255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.843 qpair failed and we were unable to recover it. 00:29:28.843 [2024-07-26 11:37:24.485457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.843 [2024-07-26 11:37:24.485485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.843 qpair failed and we were unable to recover it. 00:29:28.843 [2024-07-26 11:37:24.485701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.843 [2024-07-26 11:37:24.485747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.843 qpair failed and we were unable to recover it. 00:29:28.843 [2024-07-26 11:37:24.485977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.843 [2024-07-26 11:37:24.486029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.843 qpair failed and we were unable to recover it. 00:29:28.843 [2024-07-26 11:37:24.486218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.843 [2024-07-26 11:37:24.486274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.843 qpair failed and we were unable to recover it. 00:29:28.843 [2024-07-26 11:37:24.486507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.843 [2024-07-26 11:37:24.486554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.843 qpair failed and we were unable to recover it. 00:29:28.843 [2024-07-26 11:37:24.486790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.843 [2024-07-26 11:37:24.486842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.843 qpair failed and we were unable to recover it. 
00:29:28.843 [2024-07-26 11:37:24.487076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.843 [2024-07-26 11:37:24.487124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.843 qpair failed and we were unable to recover it. 00:29:28.843 [2024-07-26 11:37:24.487272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.843 [2024-07-26 11:37:24.487300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.843 qpair failed and we were unable to recover it. 00:29:28.843 [2024-07-26 11:37:24.487492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.843 [2024-07-26 11:37:24.487526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.843 qpair failed and we were unable to recover it. 00:29:28.843 [2024-07-26 11:37:24.487766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.844 [2024-07-26 11:37:24.487830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:28.844 qpair failed and we were unable to recover it. 00:29:29.120 [2024-07-26 11:37:24.488022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.120 [2024-07-26 11:37:24.488068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.120 qpair failed and we were unable to recover it. 00:29:29.120 [2024-07-26 11:37:24.488254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.120 [2024-07-26 11:37:24.488282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.120 qpair failed and we were unable to recover it. 00:29:29.120 [2024-07-26 11:37:24.488523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.120 [2024-07-26 11:37:24.488569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.121 qpair failed and we were unable to recover it. 00:29:29.121 [2024-07-26 11:37:24.488728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.121 [2024-07-26 11:37:24.488775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.121 qpair failed and we were unable to recover it. 00:29:29.121 [2024-07-26 11:37:24.488955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.121 [2024-07-26 11:37:24.489001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.121 qpair failed and we were unable to recover it. 00:29:29.121 [2024-07-26 11:37:24.489169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.121 [2024-07-26 11:37:24.489196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.121 qpair failed and we were unable to recover it. 
00:29:29.121 [2024-07-26 11:37:24.489374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.121 [2024-07-26 11:37:24.489402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.121 qpair failed and we were unable to recover it. 00:29:29.121 [2024-07-26 11:37:24.489609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.121 [2024-07-26 11:37:24.489642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.121 qpair failed and we were unable to recover it. 00:29:29.121 [2024-07-26 11:37:24.489877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.121 [2024-07-26 11:37:24.489938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.121 qpair failed and we were unable to recover it. 00:29:29.121 [2024-07-26 11:37:24.490127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.121 [2024-07-26 11:37:24.490177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.121 qpair failed and we were unable to recover it. 00:29:29.121 [2024-07-26 11:37:24.490380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.121 [2024-07-26 11:37:24.490408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.121 qpair failed and we were unable to recover it. 00:29:29.121 [2024-07-26 11:37:24.490639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.121 [2024-07-26 11:37:24.490685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.121 qpair failed and we were unable to recover it. 00:29:29.121 [2024-07-26 11:37:24.490901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.121 [2024-07-26 11:37:24.490948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.121 qpair failed and we were unable to recover it. 00:29:29.121 [2024-07-26 11:37:24.491134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.121 [2024-07-26 11:37:24.491179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.121 qpair failed and we were unable to recover it. 00:29:29.121 [2024-07-26 11:37:24.491393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.121 [2024-07-26 11:37:24.491421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.121 qpair failed and we were unable to recover it. 00:29:29.121 [2024-07-26 11:37:24.491639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.121 [2024-07-26 11:37:24.491685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.121 qpair failed and we were unable to recover it. 
00:29:29.121 [2024-07-26 11:37:24.491884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.121 [2024-07-26 11:37:24.491934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.121 qpair failed and we were unable to recover it. 00:29:29.121 [2024-07-26 11:37:24.492127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.121 [2024-07-26 11:37:24.492175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.121 qpair failed and we were unable to recover it. 00:29:29.121 [2024-07-26 11:37:24.492359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.121 [2024-07-26 11:37:24.492387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.121 qpair failed and we were unable to recover it. 00:29:29.121 [2024-07-26 11:37:24.492572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.121 [2024-07-26 11:37:24.492619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.121 qpair failed and we were unable to recover it. 00:29:29.121 [2024-07-26 11:37:24.492803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.121 [2024-07-26 11:37:24.492853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.121 qpair failed and we were unable to recover it. 00:29:29.121 [2024-07-26 11:37:24.493042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.121 [2024-07-26 11:37:24.493088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.121 qpair failed and we were unable to recover it. 00:29:29.121 [2024-07-26 11:37:24.493299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.121 [2024-07-26 11:37:24.493326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.121 qpair failed and we were unable to recover it. 00:29:29.121 [2024-07-26 11:37:24.493536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.121 [2024-07-26 11:37:24.493565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.121 qpair failed and we were unable to recover it. 00:29:29.121 [2024-07-26 11:37:24.493746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.121 [2024-07-26 11:37:24.493793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.121 qpair failed and we were unable to recover it. 00:29:29.121 [2024-07-26 11:37:24.494019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.121 [2024-07-26 11:37:24.494069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.121 qpair failed and we were unable to recover it. 
00:29:29.127 [2024-07-26 11:37:24.542908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.127 [2024-07-26 11:37:24.542959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.127 qpair failed and we were unable to recover it. 00:29:29.127 [2024-07-26 11:37:24.543190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.127 [2024-07-26 11:37:24.543243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.127 qpair failed and we were unable to recover it. 00:29:29.127 [2024-07-26 11:37:24.543419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.127 [2024-07-26 11:37:24.543455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.127 qpair failed and we were unable to recover it. 00:29:29.127 [2024-07-26 11:37:24.543627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.127 [2024-07-26 11:37:24.543655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.127 qpair failed and we were unable to recover it. 00:29:29.127 [2024-07-26 11:37:24.543869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.127 [2024-07-26 11:37:24.543920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.127 qpair failed and we were unable to recover it. 00:29:29.127 [2024-07-26 11:37:24.544072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.127 [2024-07-26 11:37:24.544119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.127 qpair failed and we were unable to recover it. 00:29:29.127 [2024-07-26 11:37:24.544295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.127 [2024-07-26 11:37:24.544323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.127 qpair failed and we were unable to recover it. 00:29:29.127 [2024-07-26 11:37:24.544500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.127 [2024-07-26 11:37:24.544556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.127 qpair failed and we were unable to recover it. 00:29:29.127 [2024-07-26 11:37:24.544738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.127 [2024-07-26 11:37:24.544784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.127 qpair failed and we were unable to recover it. 00:29:29.127 [2024-07-26 11:37:24.545002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.127 [2024-07-26 11:37:24.545053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.127 qpair failed and we were unable to recover it. 
00:29:29.127 [2024-07-26 11:37:24.545228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.127 [2024-07-26 11:37:24.545255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.127 qpair failed and we were unable to recover it. 00:29:29.127 [2024-07-26 11:37:24.545457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.127 [2024-07-26 11:37:24.545485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.127 qpair failed and we were unable to recover it. 00:29:29.127 [2024-07-26 11:37:24.545682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.127 [2024-07-26 11:37:24.545733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.127 qpair failed and we were unable to recover it. 00:29:29.127 [2024-07-26 11:37:24.545958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.127 [2024-07-26 11:37:24.546009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.127 qpair failed and we were unable to recover it. 00:29:29.127 [2024-07-26 11:37:24.546236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.127 [2024-07-26 11:37:24.546282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.127 qpair failed and we were unable to recover it. 00:29:29.127 [2024-07-26 11:37:24.546497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.127 [2024-07-26 11:37:24.546525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.127 qpair failed and we were unable to recover it. 00:29:29.127 [2024-07-26 11:37:24.546716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.127 [2024-07-26 11:37:24.546764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.127 qpair failed and we were unable to recover it. 00:29:29.127 [2024-07-26 11:37:24.546970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.127 [2024-07-26 11:37:24.547016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.127 qpair failed and we were unable to recover it. 00:29:29.127 [2024-07-26 11:37:24.547234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.127 [2024-07-26 11:37:24.547285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.127 qpair failed and we were unable to recover it. 00:29:29.127 [2024-07-26 11:37:24.547498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.127 [2024-07-26 11:37:24.547547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.127 qpair failed and we were unable to recover it. 
00:29:29.127 [2024-07-26 11:37:24.547719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.127 [2024-07-26 11:37:24.547764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.127 qpair failed and we were unable to recover it. 00:29:29.127 [2024-07-26 11:37:24.547986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.127 [2024-07-26 11:37:24.548037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.127 qpair failed and we were unable to recover it. 00:29:29.127 [2024-07-26 11:37:24.548251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.127 [2024-07-26 11:37:24.548278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.127 qpair failed and we were unable to recover it. 00:29:29.127 [2024-07-26 11:37:24.548488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.127 [2024-07-26 11:37:24.548534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.127 qpair failed and we were unable to recover it. 00:29:29.127 [2024-07-26 11:37:24.548746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.127 [2024-07-26 11:37:24.548798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.127 qpair failed and we were unable to recover it. 00:29:29.127 [2024-07-26 11:37:24.548990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.127 [2024-07-26 11:37:24.549040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.127 qpair failed and we were unable to recover it. 00:29:29.127 [2024-07-26 11:37:24.549220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.127 [2024-07-26 11:37:24.549247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.127 qpair failed and we were unable to recover it. 00:29:29.127 [2024-07-26 11:37:24.549453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.127 [2024-07-26 11:37:24.549481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.127 qpair failed and we were unable to recover it. 00:29:29.127 [2024-07-26 11:37:24.549708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.127 [2024-07-26 11:37:24.549763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.127 qpair failed and we were unable to recover it. 00:29:29.127 [2024-07-26 11:37:24.549951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.127 [2024-07-26 11:37:24.549997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.127 qpair failed and we were unable to recover it. 
00:29:29.127 [2024-07-26 11:37:24.550196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.127 [2024-07-26 11:37:24.550246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.127 qpair failed and we were unable to recover it. 00:29:29.127 [2024-07-26 11:37:24.550455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.127 [2024-07-26 11:37:24.550483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.127 qpair failed and we were unable to recover it. 00:29:29.128 [2024-07-26 11:37:24.550694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.128 [2024-07-26 11:37:24.550740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.128 qpair failed and we were unable to recover it. 00:29:29.128 [2024-07-26 11:37:24.550957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.128 [2024-07-26 11:37:24.551008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.128 qpair failed and we were unable to recover it. 00:29:29.128 [2024-07-26 11:37:24.551174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.128 [2024-07-26 11:37:24.551224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.128 qpair failed and we were unable to recover it. 00:29:29.128 [2024-07-26 11:37:24.551402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.128 [2024-07-26 11:37:24.551436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.128 qpair failed and we were unable to recover it. 00:29:29.128 [2024-07-26 11:37:24.551641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.128 [2024-07-26 11:37:24.551669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.128 qpair failed and we were unable to recover it. 00:29:29.128 [2024-07-26 11:37:24.551892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.128 [2024-07-26 11:37:24.551940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.128 qpair failed and we were unable to recover it. 00:29:29.128 [2024-07-26 11:37:24.552149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.128 [2024-07-26 11:37:24.552196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.128 qpair failed and we were unable to recover it. 00:29:29.128 [2024-07-26 11:37:24.552405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.128 [2024-07-26 11:37:24.552441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.128 qpair failed and we were unable to recover it. 
00:29:29.128 [2024-07-26 11:37:24.552615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.128 [2024-07-26 11:37:24.552643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.128 qpair failed and we were unable to recover it. 00:29:29.128 [2024-07-26 11:37:24.552856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.128 [2024-07-26 11:37:24.552902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.128 qpair failed and we were unable to recover it. 00:29:29.128 [2024-07-26 11:37:24.553135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.128 [2024-07-26 11:37:24.553185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.128 qpair failed and we were unable to recover it. 00:29:29.128 [2024-07-26 11:37:24.553342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.128 [2024-07-26 11:37:24.553370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.128 qpair failed and we were unable to recover it. 00:29:29.128 [2024-07-26 11:37:24.553572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.128 [2024-07-26 11:37:24.553600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.128 qpair failed and we were unable to recover it. 00:29:29.128 [2024-07-26 11:37:24.553841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.128 [2024-07-26 11:37:24.553892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.128 qpair failed and we were unable to recover it. 00:29:29.128 [2024-07-26 11:37:24.554079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.128 [2024-07-26 11:37:24.554126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.128 qpair failed and we were unable to recover it. 00:29:29.128 [2024-07-26 11:37:24.554290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.128 [2024-07-26 11:37:24.554318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.128 qpair failed and we were unable to recover it. 00:29:29.128 [2024-07-26 11:37:24.554510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.128 [2024-07-26 11:37:24.554544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.128 qpair failed and we were unable to recover it. 00:29:29.128 [2024-07-26 11:37:24.554789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.128 [2024-07-26 11:37:24.554839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.128 qpair failed and we were unable to recover it. 
00:29:29.128 [2024-07-26 11:37:24.555065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.128 [2024-07-26 11:37:24.555111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.128 qpair failed and we were unable to recover it. 00:29:29.128 [2024-07-26 11:37:24.555314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.128 [2024-07-26 11:37:24.555341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.128 qpair failed and we were unable to recover it. 00:29:29.128 [2024-07-26 11:37:24.555522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.128 [2024-07-26 11:37:24.555567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.128 qpair failed and we were unable to recover it. 00:29:29.128 [2024-07-26 11:37:24.555779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.128 [2024-07-26 11:37:24.555824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.128 qpair failed and we were unable to recover it. 00:29:29.128 [2024-07-26 11:37:24.556019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.128 [2024-07-26 11:37:24.556071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.128 qpair failed and we were unable to recover it. 00:29:29.128 [2024-07-26 11:37:24.556249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.128 [2024-07-26 11:37:24.556281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.128 qpair failed and we were unable to recover it. 00:29:29.128 [2024-07-26 11:37:24.556459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.128 [2024-07-26 11:37:24.556506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.128 qpair failed and we were unable to recover it. 00:29:29.128 [2024-07-26 11:37:24.556721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.128 [2024-07-26 11:37:24.556770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.128 qpair failed and we were unable to recover it. 00:29:29.128 [2024-07-26 11:37:24.556956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.128 [2024-07-26 11:37:24.557006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.128 qpair failed and we were unable to recover it. 00:29:29.128 [2024-07-26 11:37:24.557211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.128 [2024-07-26 11:37:24.557238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.128 qpair failed and we were unable to recover it. 
00:29:29.128 [2024-07-26 11:37:24.557444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.128 [2024-07-26 11:37:24.557473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.128 qpair failed and we were unable to recover it. 00:29:29.128 [2024-07-26 11:37:24.557675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.128 [2024-07-26 11:37:24.557720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.128 qpair failed and we were unable to recover it. 00:29:29.128 [2024-07-26 11:37:24.557928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.128 [2024-07-26 11:37:24.557974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.128 qpair failed and we were unable to recover it. 00:29:29.128 [2024-07-26 11:37:24.558166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.128 [2024-07-26 11:37:24.558217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.128 qpair failed and we were unable to recover it. 00:29:29.128 [2024-07-26 11:37:24.558392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.128 [2024-07-26 11:37:24.558420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.128 qpair failed and we were unable to recover it. 00:29:29.128 [2024-07-26 11:37:24.558658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.128 [2024-07-26 11:37:24.558704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.128 qpair failed and we were unable to recover it. 00:29:29.128 [2024-07-26 11:37:24.558929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.128 [2024-07-26 11:37:24.558977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.128 qpair failed and we were unable to recover it. 00:29:29.128 [2024-07-26 11:37:24.559178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.128 [2024-07-26 11:37:24.559227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.128 qpair failed and we were unable to recover it. 00:29:29.128 [2024-07-26 11:37:24.559402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.128 [2024-07-26 11:37:24.559437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.128 qpair failed and we were unable to recover it. 00:29:29.128 [2024-07-26 11:37:24.559610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.129 [2024-07-26 11:37:24.559655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.129 qpair failed and we were unable to recover it. 
00:29:29.129 [2024-07-26 11:37:24.559868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.129 [2024-07-26 11:37:24.559917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.129 qpair failed and we were unable to recover it. 00:29:29.129 [2024-07-26 11:37:24.560100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.129 [2024-07-26 11:37:24.560147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.129 qpair failed and we were unable to recover it. 00:29:29.129 [2024-07-26 11:37:24.560326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.129 [2024-07-26 11:37:24.560354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.129 qpair failed and we were unable to recover it. 00:29:29.129 [2024-07-26 11:37:24.560514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.129 [2024-07-26 11:37:24.560542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.129 qpair failed and we were unable to recover it. 00:29:29.129 [2024-07-26 11:37:24.560733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.129 [2024-07-26 11:37:24.560781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.129 qpair failed and we were unable to recover it. 00:29:29.129 [2024-07-26 11:37:24.561006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.129 [2024-07-26 11:37:24.561057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.129 qpair failed and we were unable to recover it. 00:29:29.129 [2024-07-26 11:37:24.561239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.129 [2024-07-26 11:37:24.561291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.129 qpair failed and we were unable to recover it. 00:29:29.129 [2024-07-26 11:37:24.561484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.129 [2024-07-26 11:37:24.561533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.129 qpair failed and we were unable to recover it. 00:29:29.129 [2024-07-26 11:37:24.561778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.129 [2024-07-26 11:37:24.561825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.129 qpair failed and we were unable to recover it. 00:29:29.129 [2024-07-26 11:37:24.562049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.129 [2024-07-26 11:37:24.562099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.129 qpair failed and we were unable to recover it. 
00:29:29.129 [2024-07-26 11:37:24.562309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.129 [2024-07-26 11:37:24.562337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.129 qpair failed and we were unable to recover it. 00:29:29.129 [2024-07-26 11:37:24.562516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.129 [2024-07-26 11:37:24.562565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.129 qpair failed and we were unable to recover it. 00:29:29.129 [2024-07-26 11:37:24.562761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.129 [2024-07-26 11:37:24.562813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.129 qpair failed and we were unable to recover it. 00:29:29.129 [2024-07-26 11:37:24.563011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.129 [2024-07-26 11:37:24.563057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.129 qpair failed and we were unable to recover it. 00:29:29.129 [2024-07-26 11:37:24.563239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.129 [2024-07-26 11:37:24.563266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.129 qpair failed and we were unable to recover it. 00:29:29.129 [2024-07-26 11:37:24.563447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.129 [2024-07-26 11:37:24.563476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.129 qpair failed and we were unable to recover it. 00:29:29.129 [2024-07-26 11:37:24.563648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.129 [2024-07-26 11:37:24.563695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.129 qpair failed and we were unable to recover it. 00:29:29.129 [2024-07-26 11:37:24.563915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.129 [2024-07-26 11:37:24.563965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.129 qpair failed and we were unable to recover it. 00:29:29.129 [2024-07-26 11:37:24.564162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.129 [2024-07-26 11:37:24.564210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.129 qpair failed and we were unable to recover it. 00:29:29.129 [2024-07-26 11:37:24.564418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.129 [2024-07-26 11:37:24.564453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.129 qpair failed and we were unable to recover it. 
00:29:29.129 [2024-07-26 11:37:24.564662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.129 [2024-07-26 11:37:24.564689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.129 qpair failed and we were unable to recover it. 00:29:29.129 [2024-07-26 11:37:24.564877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.129 [2024-07-26 11:37:24.564926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.129 qpair failed and we were unable to recover it. 00:29:29.129 [2024-07-26 11:37:24.565134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.129 [2024-07-26 11:37:24.565181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.129 qpair failed and we were unable to recover it. 00:29:29.129 [2024-07-26 11:37:24.565338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.129 [2024-07-26 11:37:24.565365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.129 qpair failed and we were unable to recover it. 00:29:29.129 [2024-07-26 11:37:24.565511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.129 [2024-07-26 11:37:24.565539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.129 qpair failed and we were unable to recover it. 00:29:29.129 [2024-07-26 11:37:24.565706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.129 [2024-07-26 11:37:24.565752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.129 qpair failed and we were unable to recover it. 00:29:29.129 [2024-07-26 11:37:24.565974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.129 [2024-07-26 11:37:24.566028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.129 qpair failed and we were unable to recover it. 00:29:29.129 [2024-07-26 11:37:24.566226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.129 [2024-07-26 11:37:24.566275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.129 qpair failed and we were unable to recover it. 00:29:29.129 [2024-07-26 11:37:24.566499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.129 [2024-07-26 11:37:24.566528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.129 qpair failed and we were unable to recover it. 00:29:29.129 [2024-07-26 11:37:24.566756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.129 [2024-07-26 11:37:24.566813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.129 qpair failed and we were unable to recover it. 
00:29:29.129 [2024-07-26 11:37:24.567004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.129 [2024-07-26 11:37:24.567055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.129 qpair failed and we were unable to recover it. 00:29:29.129 [2024-07-26 11:37:24.567258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.129 [2024-07-26 11:37:24.567286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.129 qpair failed and we were unable to recover it. 00:29:29.129 [2024-07-26 11:37:24.567505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.129 [2024-07-26 11:37:24.567556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.129 qpair failed and we were unable to recover it. 00:29:29.129 [2024-07-26 11:37:24.567793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.129 [2024-07-26 11:37:24.567844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.129 qpair failed and we were unable to recover it. 00:29:29.129 [2024-07-26 11:37:24.568071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.129 [2024-07-26 11:37:24.568116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.129 qpair failed and we were unable to recover it. 00:29:29.129 [2024-07-26 11:37:24.568320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.129 [2024-07-26 11:37:24.568348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.129 qpair failed and we were unable to recover it. 00:29:29.129 [2024-07-26 11:37:24.568557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.129 [2024-07-26 11:37:24.568608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.129 qpair failed and we were unable to recover it. 00:29:29.129 [2024-07-26 11:37:24.568796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.130 [2024-07-26 11:37:24.568841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.130 qpair failed and we were unable to recover it. 00:29:29.130 [2024-07-26 11:37:24.569063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.130 [2024-07-26 11:37:24.569112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.130 qpair failed and we were unable to recover it. 00:29:29.130 [2024-07-26 11:37:24.569343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.130 [2024-07-26 11:37:24.569370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.130 qpair failed and we were unable to recover it. 
00:29:29.130 [2024-07-26 11:37:24.569548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.130 [2024-07-26 11:37:24.569594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.130 qpair failed and we were unable to recover it. 00:29:29.130 [2024-07-26 11:37:24.569830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.130 [2024-07-26 11:37:24.569879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.130 qpair failed and we were unable to recover it. 00:29:29.130 [2024-07-26 11:37:24.570060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.130 [2024-07-26 11:37:24.570109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.130 qpair failed and we were unable to recover it. 00:29:29.130 [2024-07-26 11:37:24.570314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.130 [2024-07-26 11:37:24.570341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.130 qpair failed and we were unable to recover it. 00:29:29.130 [2024-07-26 11:37:24.570518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.130 [2024-07-26 11:37:24.570565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.130 qpair failed and we were unable to recover it. 00:29:29.130 [2024-07-26 11:37:24.570785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.130 [2024-07-26 11:37:24.570836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.130 qpair failed and we were unable to recover it. 00:29:29.130 [2024-07-26 11:37:24.571025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.130 [2024-07-26 11:37:24.571071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.130 qpair failed and we were unable to recover it. 00:29:29.130 [2024-07-26 11:37:24.571278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.130 [2024-07-26 11:37:24.571306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.130 qpair failed and we were unable to recover it. 00:29:29.130 [2024-07-26 11:37:24.571511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.130 [2024-07-26 11:37:24.571558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.130 qpair failed and we were unable to recover it. 00:29:29.130 [2024-07-26 11:37:24.571770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.130 [2024-07-26 11:37:24.571817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.130 qpair failed and we were unable to recover it. 
00:29:29.130 [2024-07-26 11:37:24.571985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.130 [2024-07-26 11:37:24.572034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.130 qpair failed and we were unable to recover it. 00:29:29.130 [2024-07-26 11:37:24.572210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.130 [2024-07-26 11:37:24.572238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.130 qpair failed and we were unable to recover it. 00:29:29.130 [2024-07-26 11:37:24.572413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.130 [2024-07-26 11:37:24.572449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.130 qpair failed and we were unable to recover it. 00:29:29.130 [2024-07-26 11:37:24.572628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.130 [2024-07-26 11:37:24.572683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.130 qpair failed and we were unable to recover it. 00:29:29.130 [2024-07-26 11:37:24.572855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.130 [2024-07-26 11:37:24.572905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.130 qpair failed and we were unable to recover it. 00:29:29.130 [2024-07-26 11:37:24.573111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.130 [2024-07-26 11:37:24.573157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.130 qpair failed and we were unable to recover it. 00:29:29.130 [2024-07-26 11:37:24.573363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.130 [2024-07-26 11:37:24.573390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.130 qpair failed and we were unable to recover it. 00:29:29.130 [2024-07-26 11:37:24.573562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.130 [2024-07-26 11:37:24.573609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.130 qpair failed and we were unable to recover it. 00:29:29.130 [2024-07-26 11:37:24.573791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.130 [2024-07-26 11:37:24.573836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.130 qpair failed and we were unable to recover it. 00:29:29.130 [2024-07-26 11:37:24.574051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.130 [2024-07-26 11:37:24.574103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.130 qpair failed and we were unable to recover it. 
00:29:29.130 [2024-07-26 11:37:24.574318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.130 [2024-07-26 11:37:24.574346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420
00:29:29.130 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for roughly 200 further connection attempts between 11:37:24.574562 and 11:37:24.625421 ...]
00:29:29.136 [2024-07-26 11:37:24.625649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.136 [2024-07-26 11:37:24.625695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420
00:29:29.136 qpair failed and we were unable to recover it.
00:29:29.136 [2024-07-26 11:37:24.625845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.136 [2024-07-26 11:37:24.625896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.136 qpair failed and we were unable to recover it. 00:29:29.136 [2024-07-26 11:37:24.626087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.136 [2024-07-26 11:37:24.626137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.136 qpair failed and we were unable to recover it. 00:29:29.136 [2024-07-26 11:37:24.626353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.136 [2024-07-26 11:37:24.626381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.136 qpair failed and we were unable to recover it. 00:29:29.136 [2024-07-26 11:37:24.626597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.136 [2024-07-26 11:37:24.626625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.136 qpair failed and we were unable to recover it. 00:29:29.136 [2024-07-26 11:37:24.626841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.136 [2024-07-26 11:37:24.626886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.136 qpair failed and we were unable to recover it. 00:29:29.136 [2024-07-26 11:37:24.627050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.136 [2024-07-26 11:37:24.627099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.136 qpair failed and we were unable to recover it. 00:29:29.136 [2024-07-26 11:37:24.627232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.136 [2024-07-26 11:37:24.627260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.136 qpair failed and we were unable to recover it. 00:29:29.136 [2024-07-26 11:37:24.627444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.136 [2024-07-26 11:37:24.627472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.136 qpair failed and we were unable to recover it. 00:29:29.136 [2024-07-26 11:37:24.627703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.136 [2024-07-26 11:37:24.627761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.136 qpair failed and we were unable to recover it. 00:29:29.136 [2024-07-26 11:37:24.627947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.136 [2024-07-26 11:37:24.627997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.136 qpair failed and we were unable to recover it. 
00:29:29.136 [2024-07-26 11:37:24.628217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.136 [2024-07-26 11:37:24.628262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.136 qpair failed and we were unable to recover it. 00:29:29.136 [2024-07-26 11:37:24.628448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.136 [2024-07-26 11:37:24.628476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.136 qpair failed and we were unable to recover it. 00:29:29.136 [2024-07-26 11:37:24.628679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.136 [2024-07-26 11:37:24.628706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.136 qpair failed and we were unable to recover it. 00:29:29.136 [2024-07-26 11:37:24.628935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.136 [2024-07-26 11:37:24.628980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.136 qpair failed and we were unable to recover it. 00:29:29.136 [2024-07-26 11:37:24.629214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.136 [2024-07-26 11:37:24.629265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.136 qpair failed and we were unable to recover it. 00:29:29.136 [2024-07-26 11:37:24.629470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.136 [2024-07-26 11:37:24.629498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.136 qpair failed and we were unable to recover it. 00:29:29.136 [2024-07-26 11:37:24.629678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.136 [2024-07-26 11:37:24.629706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.136 qpair failed and we were unable to recover it. 00:29:29.136 [2024-07-26 11:37:24.629912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.136 [2024-07-26 11:37:24.629963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.136 qpair failed and we were unable to recover it. 00:29:29.136 [2024-07-26 11:37:24.630169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.136 [2024-07-26 11:37:24.630219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.136 qpair failed and we were unable to recover it. 00:29:29.136 [2024-07-26 11:37:24.630405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.136 [2024-07-26 11:37:24.630443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.136 qpair failed and we were unable to recover it. 
00:29:29.136 [2024-07-26 11:37:24.630655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-07-26 11:37:24.630683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-07-26 11:37:24.630861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-07-26 11:37:24.630909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-07-26 11:37:24.631116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-07-26 11:37:24.631167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-07-26 11:37:24.631309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-07-26 11:37:24.631336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-07-26 11:37:24.631537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-07-26 11:37:24.631566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-07-26 11:37:24.631750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-07-26 11:37:24.631805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-07-26 11:37:24.632005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-07-26 11:37:24.632056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-07-26 11:37:24.632263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-07-26 11:37:24.632309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-07-26 11:37:24.632519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-07-26 11:37:24.632569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-07-26 11:37:24.632761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-07-26 11:37:24.632811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 
00:29:29.137 [2024-07-26 11:37:24.633030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-07-26 11:37:24.633081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-07-26 11:37:24.633255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-07-26 11:37:24.633283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-07-26 11:37:24.633492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-07-26 11:37:24.633537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-07-26 11:37:24.633764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-07-26 11:37:24.633812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-07-26 11:37:24.634004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-07-26 11:37:24.634056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-07-26 11:37:24.634277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-07-26 11:37:24.634304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-07-26 11:37:24.634493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-07-26 11:37:24.634539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-07-26 11:37:24.634728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-07-26 11:37:24.634783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-07-26 11:37:24.635013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-07-26 11:37:24.635063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-07-26 11:37:24.635268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-07-26 11:37:24.635296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 
00:29:29.137 [2024-07-26 11:37:24.635485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-07-26 11:37:24.635535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-07-26 11:37:24.635745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-07-26 11:37:24.635802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-07-26 11:37:24.636021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-07-26 11:37:24.636072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-07-26 11:37:24.636251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-07-26 11:37:24.636278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-07-26 11:37:24.636465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-07-26 11:37:24.636511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-07-26 11:37:24.636739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-07-26 11:37:24.636792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-07-26 11:37:24.637015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-07-26 11:37:24.637064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-07-26 11:37:24.637243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-07-26 11:37:24.637270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-07-26 11:37:24.637477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-07-26 11:37:24.637505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-07-26 11:37:24.637696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-07-26 11:37:24.637746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 
00:29:29.137 [2024-07-26 11:37:24.637975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-07-26 11:37:24.638025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-07-26 11:37:24.638198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-07-26 11:37:24.638244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-07-26 11:37:24.638464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-07-26 11:37:24.638492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-07-26 11:37:24.638719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-07-26 11:37:24.638771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-07-26 11:37:24.638953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-07-26 11:37:24.639003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-07-26 11:37:24.639212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-07-26 11:37:24.639256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-07-26 11:37:24.639489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-07-26 11:37:24.639517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-07-26 11:37:24.639752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.137 [2024-07-26 11:37:24.639802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.137 qpair failed and we were unable to recover it. 00:29:29.137 [2024-07-26 11:37:24.639962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-07-26 11:37:24.640014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-07-26 11:37:24.640232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-07-26 11:37:24.640277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 
00:29:29.138 [2024-07-26 11:37:24.640506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-07-26 11:37:24.640534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-07-26 11:37:24.640742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-07-26 11:37:24.640792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-07-26 11:37:24.640988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-07-26 11:37:24.641036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-07-26 11:37:24.641240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-07-26 11:37:24.641285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-07-26 11:37:24.641469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-07-26 11:37:24.641515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-07-26 11:37:24.641698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-07-26 11:37:24.641748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-07-26 11:37:24.641937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-07-26 11:37:24.641987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-07-26 11:37:24.642177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-07-26 11:37:24.642222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-07-26 11:37:24.642394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-07-26 11:37:24.642422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-07-26 11:37:24.642577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-07-26 11:37:24.642628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 
00:29:29.138 [2024-07-26 11:37:24.642819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-07-26 11:37:24.642869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-07-26 11:37:24.643093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-07-26 11:37:24.643139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-07-26 11:37:24.643350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-07-26 11:37:24.643377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-07-26 11:37:24.643562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-07-26 11:37:24.643608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-07-26 11:37:24.643833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-07-26 11:37:24.643884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-07-26 11:37:24.644068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-07-26 11:37:24.644113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-07-26 11:37:24.644298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-07-26 11:37:24.644326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-07-26 11:37:24.644541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-07-26 11:37:24.644587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-07-26 11:37:24.644827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-07-26 11:37:24.644887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-07-26 11:37:24.645071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-07-26 11:37:24.645117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 
00:29:29.138 [2024-07-26 11:37:24.645329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-07-26 11:37:24.645356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-07-26 11:37:24.645576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-07-26 11:37:24.645622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-07-26 11:37:24.645851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-07-26 11:37:24.645899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-07-26 11:37:24.646103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-07-26 11:37:24.646151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-07-26 11:37:24.646364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-07-26 11:37:24.646391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-07-26 11:37:24.646610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-07-26 11:37:24.646638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-07-26 11:37:24.646840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-07-26 11:37:24.646889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-07-26 11:37:24.647076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-07-26 11:37:24.647122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-07-26 11:37:24.647302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-07-26 11:37:24.647330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-07-26 11:37:24.647503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-07-26 11:37:24.647550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 
00:29:29.138 [2024-07-26 11:37:24.647769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-07-26 11:37:24.647820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-07-26 11:37:24.648039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-07-26 11:37:24.648086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-07-26 11:37:24.648286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-07-26 11:37:24.648313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-07-26 11:37:24.648522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-07-26 11:37:24.648568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-07-26 11:37:24.648792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-07-26 11:37:24.648843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.138 [2024-07-26 11:37:24.648974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.138 [2024-07-26 11:37:24.649019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.138 qpair failed and we were unable to recover it. 00:29:29.139 [2024-07-26 11:37:24.649241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-07-26 11:37:24.649275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-07-26 11:37:24.649492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-07-26 11:37:24.649541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-07-26 11:37:24.649706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-07-26 11:37:24.649752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-07-26 11:37:24.649964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-07-26 11:37:24.649991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 
00:29:29.139 [2024-07-26 11:37:24.650228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-07-26 11:37:24.650275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-07-26 11:37:24.650475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-07-26 11:37:24.650503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-07-26 11:37:24.650730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-07-26 11:37:24.650779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-07-26 11:37:24.650964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-07-26 11:37:24.651011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-07-26 11:37:24.651191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-07-26 11:37:24.651237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-07-26 11:37:24.651412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-07-26 11:37:24.651449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-07-26 11:37:24.651651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-07-26 11:37:24.651712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-07-26 11:37:24.651917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-07-26 11:37:24.651962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-07-26 11:37:24.652178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-07-26 11:37:24.652224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-07-26 11:37:24.652405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-07-26 11:37:24.652448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 
00:29:29.139 [2024-07-26 11:37:24.652665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-07-26 11:37:24.652711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-07-26 11:37:24.652875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-07-26 11:37:24.652922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-07-26 11:37:24.653139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-07-26 11:37:24.653187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-07-26 11:37:24.653366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-07-26 11:37:24.653393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-07-26 11:37:24.653591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-07-26 11:37:24.653636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-07-26 11:37:24.653840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-07-26 11:37:24.653886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-07-26 11:37:24.654035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-07-26 11:37:24.654086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-07-26 11:37:24.654277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-07-26 11:37:24.654326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-07-26 11:37:24.654509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-07-26 11:37:24.654557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-07-26 11:37:24.654774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-07-26 11:37:24.654821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 
00:29:29.139 [2024-07-26 11:37:24.655031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-07-26 11:37:24.655076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-07-26 11:37:24.655282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-07-26 11:37:24.655309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-07-26 11:37:24.655498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-07-26 11:37:24.655548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-07-26 11:37:24.655708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-07-26 11:37:24.655751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-07-26 11:37:24.655966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-07-26 11:37:24.656013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-07-26 11:37:24.656198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-07-26 11:37:24.656249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-07-26 11:37:24.656442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-07-26 11:37:24.656470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-07-26 11:37:24.656638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.139 [2024-07-26 11:37:24.656682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.139 qpair failed and we were unable to recover it. 00:29:29.139 [2024-07-26 11:37:24.656872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.140 [2024-07-26 11:37:24.656918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.140 qpair failed and we were unable to recover it. 00:29:29.140 [2024-07-26 11:37:24.657089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.140 [2024-07-26 11:37:24.657141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:29.140 qpair failed and we were unable to recover it. 
00:29:29.140 [2024-07-26 11:37:24.657356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.140 [2024-07-26 11:37:24.657383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420
00:29:29.140 qpair failed and we were unable to recover it.
[... the same three-line failure group repeats for every retry of tqpair=0x5c1ea0 from 11:37:24.657605 through 11:37:24.663239, only the timestamps advancing ...]
00:29:29.140 [2024-07-26 11:37:24.663443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.140 [2024-07-26 11:37:24.663489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420
00:29:29.140 qpair failed and we were unable to recover it.
[... three further identical failure groups for tqpair=0x7f0cbc000b90, 11:37:24.663702 through 11:37:24.664380 ...]
00:29:29.140 [2024-07-26 11:37:24.664600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.140 [2024-07-26 11:37:24.664629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420
00:29:29.140 qpair failed and we were unable to recover it.
[... the failure group then repeats for tqpair=0x5c1ea0 from 11:37:24.664822 through 11:37:24.685929 ...]
00:29:29.143 [2024-07-26 11:37:24.686100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5beb00 is same with the state(5) to be set
00:29:29.143 [2024-07-26 11:37:24.686486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.143 [2024-07-26 11:37:24.686531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.143 qpair failed and we were unable to recover it.
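errno = 111 on Linux is ECONNREFUSED: each connect() attempt reached 10.0.0.2, but nothing was accepting connections on port 4420 (the IANA-registered NVMe/TCP port), so the target actively refused it. Below is a minimal sketch that reproduces the same errno; it points at 127.0.0.1:4420 on the assumption that no local listener holds that port, which is an illustration rather than the test's actual target.

/* repro_econnrefused.c -- minimal sketch, assuming a Linux host with no
 * listener on 127.0.0.1:4420; prints the same errno as the log above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa = {0};
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                      /* NVMe/TCP default port */
    inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);  /* assumed closed port */

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* expected output: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Built with cc repro_econnrefused.c, this should print "connect() failed, errno = 111 (Connection refused)" whenever the port is closed, matching the posix_sock_create errors in the log.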
00:29:29.143 [2024-07-26 11:37:24.686724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-07-26 11:37:24.686754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-07-26 11:37:24.687031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-07-26 11:37:24.687096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-07-26 11:37:24.687389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-07-26 11:37:24.687418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-07-26 11:37:24.687641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-07-26 11:37:24.687688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-07-26 11:37:24.687966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-07-26 11:37:24.688030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-07-26 11:37:24.688289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-07-26 11:37:24.688355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-07-26 11:37:24.688663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-07-26 11:37:24.688712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-07-26 11:37:24.688948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-07-26 11:37:24.688984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-07-26 11:37:24.689198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-07-26 11:37:24.689262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-07-26 11:37:24.689535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-07-26 11:37:24.689565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 
00:29:29.143 [2024-07-26 11:37:24.689801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-07-26 11:37:24.689836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-07-26 11:37:24.690146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-07-26 11:37:24.690210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-07-26 11:37:24.690484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-07-26 11:37:24.690514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-07-26 11:37:24.690684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-07-26 11:37:24.690731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-07-26 11:37:24.690960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-07-26 11:37:24.690988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-07-26 11:37:24.691211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-07-26 11:37:24.691275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-07-26 11:37:24.691557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-07-26 11:37:24.691587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-07-26 11:37:24.691767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-07-26 11:37:24.691796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-07-26 11:37:24.692000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-07-26 11:37:24.692064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-07-26 11:37:24.692366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-07-26 11:37:24.692445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 
00:29:29.143 [2024-07-26 11:37:24.692701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-07-26 11:37:24.692730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-07-26 11:37:24.693011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-07-26 11:37:24.693077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-07-26 11:37:24.693369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-07-26 11:37:24.693445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-07-26 11:37:24.693677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-07-26 11:37:24.693715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-07-26 11:37:24.694019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-07-26 11:37:24.694084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-07-26 11:37:24.694398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-07-26 11:37:24.694486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-07-26 11:37:24.694699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-07-26 11:37:24.694728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-07-26 11:37:24.695022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-07-26 11:37:24.695087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-07-26 11:37:24.695391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-07-26 11:37:24.695492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 00:29:29.143 [2024-07-26 11:37:24.695698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.143 [2024-07-26 11:37:24.695727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.143 qpair failed and we were unable to recover it. 
00:29:29.144 [2024-07-26 11:37:24.695942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-07-26 11:37:24.696008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-07-26 11:37:24.696278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-07-26 11:37:24.696341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-07-26 11:37:24.696632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-07-26 11:37:24.696661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-07-26 11:37:24.696878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-07-26 11:37:24.696942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-07-26 11:37:24.697218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-07-26 11:37:24.697282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-07-26 11:37:24.697570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-07-26 11:37:24.697599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-07-26 11:37:24.697796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-07-26 11:37:24.697859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-07-26 11:37:24.698150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-07-26 11:37:24.698215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-07-26 11:37:24.698465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-07-26 11:37:24.698495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-07-26 11:37:24.698669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-07-26 11:37:24.698717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 
00:29:29.144 [2024-07-26 11:37:24.698988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-07-26 11:37:24.699023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-07-26 11:37:24.699231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-07-26 11:37:24.699260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-07-26 11:37:24.699450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-07-26 11:37:24.699517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-07-26 11:37:24.699748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-07-26 11:37:24.699783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-07-26 11:37:24.700063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-07-26 11:37:24.700092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-07-26 11:37:24.700330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-07-26 11:37:24.700394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-07-26 11:37:24.700666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-07-26 11:37:24.700695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-07-26 11:37:24.700890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-07-26 11:37:24.700919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-07-26 11:37:24.701073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-07-26 11:37:24.701148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-07-26 11:37:24.701440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-07-26 11:37:24.701490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 
00:29:29.144 [2024-07-26 11:37:24.701649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-07-26 11:37:24.701677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-07-26 11:37:24.701903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-07-26 11:37:24.701968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-07-26 11:37:24.702247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-07-26 11:37:24.702282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-07-26 11:37:24.702499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-07-26 11:37:24.702530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-07-26 11:37:24.702761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-07-26 11:37:24.702826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-07-26 11:37:24.703054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-07-26 11:37:24.703089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-07-26 11:37:24.703283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-07-26 11:37:24.703312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-07-26 11:37:24.703517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-07-26 11:37:24.703547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-07-26 11:37:24.703736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-07-26 11:37:24.703771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 00:29:29.144 [2024-07-26 11:37:24.703978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.144 [2024-07-26 11:37:24.704006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.144 qpair failed and we were unable to recover it. 
00:29:29.150 [2024-07-26 11:37:24.755610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.150 [2024-07-26 11:37:24.755640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.150 qpair failed and we were unable to recover it. 00:29:29.150 [2024-07-26 11:37:24.755852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.150 [2024-07-26 11:37:24.755880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.150 qpair failed and we were unable to recover it. 00:29:29.150 [2024-07-26 11:37:24.756146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.150 [2024-07-26 11:37:24.756211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.150 qpair failed and we were unable to recover it. 00:29:29.150 [2024-07-26 11:37:24.756463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.150 [2024-07-26 11:37:24.756511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.150 qpair failed and we were unable to recover it. 00:29:29.150 [2024-07-26 11:37:24.756731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.150 [2024-07-26 11:37:24.756760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.150 qpair failed and we were unable to recover it. 00:29:29.150 [2024-07-26 11:37:24.757063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.150 [2024-07-26 11:37:24.757127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.150 qpair failed and we were unable to recover it. 00:29:29.150 [2024-07-26 11:37:24.757382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.150 [2024-07-26 11:37:24.757416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.150 qpair failed and we were unable to recover it. 00:29:29.150 [2024-07-26 11:37:24.757637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.150 [2024-07-26 11:37:24.757667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.150 qpair failed and we were unable to recover it. 00:29:29.150 [2024-07-26 11:37:24.757862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.150 [2024-07-26 11:37:24.757926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.150 qpair failed and we were unable to recover it. 00:29:29.150 [2024-07-26 11:37:24.758209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.150 [2024-07-26 11:37:24.758244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.150 qpair failed and we were unable to recover it. 
00:29:29.150 [2024-07-26 11:37:24.758475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.150 [2024-07-26 11:37:24.758503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.150 qpair failed and we were unable to recover it. 00:29:29.150 [2024-07-26 11:37:24.758649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.150 [2024-07-26 11:37:24.758717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.150 qpair failed and we were unable to recover it. 00:29:29.150 [2024-07-26 11:37:24.758999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.150 [2024-07-26 11:37:24.759033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.150 qpair failed and we were unable to recover it. 00:29:29.150 [2024-07-26 11:37:24.759249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.150 [2024-07-26 11:37:24.759282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.150 qpair failed and we were unable to recover it. 00:29:29.150 [2024-07-26 11:37:24.759515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.150 [2024-07-26 11:37:24.759544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.150 qpair failed and we were unable to recover it. 00:29:29.150 [2024-07-26 11:37:24.759737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.150 [2024-07-26 11:37:24.759772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.150 qpair failed and we were unable to recover it. 00:29:29.150 [2024-07-26 11:37:24.759967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.150 [2024-07-26 11:37:24.759995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.150 qpair failed and we were unable to recover it. 00:29:29.150 [2024-07-26 11:37:24.760200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.150 [2024-07-26 11:37:24.760264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.150 qpair failed and we were unable to recover it. 00:29:29.150 [2024-07-26 11:37:24.760540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.150 [2024-07-26 11:37:24.760569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.150 qpair failed and we were unable to recover it. 00:29:29.150 [2024-07-26 11:37:24.760785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.150 [2024-07-26 11:37:24.760813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.150 qpair failed and we were unable to recover it. 
00:29:29.150 [2024-07-26 11:37:24.761025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.150 [2024-07-26 11:37:24.761058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.150 qpair failed and we were unable to recover it. 00:29:29.150 [2024-07-26 11:37:24.761238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.150 [2024-07-26 11:37:24.761271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.150 qpair failed and we were unable to recover it. 00:29:29.150 [2024-07-26 11:37:24.761478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.150 [2024-07-26 11:37:24.761507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.150 qpair failed and we were unable to recover it. 00:29:29.150 [2024-07-26 11:37:24.761709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.150 [2024-07-26 11:37:24.761773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.150 qpair failed and we were unable to recover it. 00:29:29.150 [2024-07-26 11:37:24.762004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.150 [2024-07-26 11:37:24.762037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.150 qpair failed and we were unable to recover it. 00:29:29.150 [2024-07-26 11:37:24.762250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.150 [2024-07-26 11:37:24.762278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.150 qpair failed and we were unable to recover it. 00:29:29.150 [2024-07-26 11:37:24.762506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.150 [2024-07-26 11:37:24.762536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.150 qpair failed and we were unable to recover it. 00:29:29.150 [2024-07-26 11:37:24.762732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.150 [2024-07-26 11:37:24.762783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.150 qpair failed and we were unable to recover it. 00:29:29.150 [2024-07-26 11:37:24.763014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.150 [2024-07-26 11:37:24.763043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.150 qpair failed and we were unable to recover it. 00:29:29.150 [2024-07-26 11:37:24.763200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.150 [2024-07-26 11:37:24.763233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.150 qpair failed and we were unable to recover it. 
00:29:29.150 [2024-07-26 11:37:24.763421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.428 [2024-07-26 11:37:24.763465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.428 qpair failed and we were unable to recover it. 00:29:29.428 [2024-07-26 11:37:24.763692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.428 [2024-07-26 11:37:24.763720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.428 qpair failed and we were unable to recover it. 00:29:29.428 [2024-07-26 11:37:24.763929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.428 [2024-07-26 11:37:24.763993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.428 qpair failed and we were unable to recover it. 00:29:29.428 [2024-07-26 11:37:24.764276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.428 [2024-07-26 11:37:24.764311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.428 qpair failed and we were unable to recover it. 00:29:29.428 [2024-07-26 11:37:24.764567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.428 [2024-07-26 11:37:24.764596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.428 qpair failed and we were unable to recover it. 00:29:29.428 [2024-07-26 11:37:24.764816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.428 [2024-07-26 11:37:24.764880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.428 qpair failed and we were unable to recover it. 00:29:29.428 [2024-07-26 11:37:24.765151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.428 [2024-07-26 11:37:24.765184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.428 qpair failed and we were unable to recover it. 00:29:29.428 [2024-07-26 11:37:24.765403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.428 [2024-07-26 11:37:24.765443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.428 qpair failed and we were unable to recover it. 00:29:29.428 [2024-07-26 11:37:24.765647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.428 [2024-07-26 11:37:24.765675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.428 qpair failed and we were unable to recover it. 00:29:29.428 [2024-07-26 11:37:24.765878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.428 [2024-07-26 11:37:24.765911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.428 qpair failed and we were unable to recover it. 
00:29:29.428 [2024-07-26 11:37:24.766130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.428 [2024-07-26 11:37:24.766159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.428 qpair failed and we were unable to recover it. 00:29:29.428 [2024-07-26 11:37:24.766312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.428 [2024-07-26 11:37:24.766345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.428 qpair failed and we were unable to recover it. 00:29:29.428 [2024-07-26 11:37:24.766568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.428 [2024-07-26 11:37:24.766598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.428 qpair failed and we were unable to recover it. 00:29:29.428 [2024-07-26 11:37:24.766805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.428 [2024-07-26 11:37:24.766833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.428 qpair failed and we were unable to recover it. 00:29:29.428 [2024-07-26 11:37:24.767058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.428 [2024-07-26 11:37:24.767133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.428 qpair failed and we were unable to recover it. 00:29:29.428 [2024-07-26 11:37:24.767292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.428 [2024-07-26 11:37:24.767325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.428 qpair failed and we were unable to recover it. 00:29:29.428 [2024-07-26 11:37:24.767539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.428 [2024-07-26 11:37:24.767568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.428 qpair failed and we were unable to recover it. 00:29:29.428 [2024-07-26 11:37:24.767791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.428 [2024-07-26 11:37:24.767856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.428 qpair failed and we were unable to recover it. 00:29:29.428 [2024-07-26 11:37:24.768155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.428 [2024-07-26 11:37:24.768189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.428 qpair failed and we were unable to recover it. 00:29:29.428 [2024-07-26 11:37:24.768386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.428 [2024-07-26 11:37:24.768414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.428 qpair failed and we were unable to recover it. 
00:29:29.428 [2024-07-26 11:37:24.768612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.428 [2024-07-26 11:37:24.768640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.428 qpair failed and we were unable to recover it. 00:29:29.428 [2024-07-26 11:37:24.768909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.428 [2024-07-26 11:37:24.768944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.428 qpair failed and we were unable to recover it. 00:29:29.428 [2024-07-26 11:37:24.769175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.428 [2024-07-26 11:37:24.769203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.428 qpair failed and we were unable to recover it. 00:29:29.428 [2024-07-26 11:37:24.769395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.428 [2024-07-26 11:37:24.769487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.428 qpair failed and we were unable to recover it. 00:29:29.428 [2024-07-26 11:37:24.769702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.428 [2024-07-26 11:37:24.769750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.428 qpair failed and we were unable to recover it. 00:29:29.428 [2024-07-26 11:37:24.769965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.428 [2024-07-26 11:37:24.769993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.428 qpair failed and we were unable to recover it. 00:29:29.428 [2024-07-26 11:37:24.770229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.428 [2024-07-26 11:37:24.770293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.428 qpair failed and we were unable to recover it. 00:29:29.428 [2024-07-26 11:37:24.770583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.428 [2024-07-26 11:37:24.770612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.428 qpair failed and we were unable to recover it. 00:29:29.428 [2024-07-26 11:37:24.770802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.428 [2024-07-26 11:37:24.770831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.428 qpair failed and we were unable to recover it. 00:29:29.428 [2024-07-26 11:37:24.771066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.428 [2024-07-26 11:37:24.771131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.428 qpair failed and we were unable to recover it. 
00:29:29.428 [2024-07-26 11:37:24.771380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.428 [2024-07-26 11:37:24.771414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.428 qpair failed and we were unable to recover it. 00:29:29.428 [2024-07-26 11:37:24.771608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.428 [2024-07-26 11:37:24.771636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.428 qpair failed and we were unable to recover it. 00:29:29.428 [2024-07-26 11:37:24.771841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.428 [2024-07-26 11:37:24.771905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.429 qpair failed and we were unable to recover it. 00:29:29.429 [2024-07-26 11:37:24.772164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.429 [2024-07-26 11:37:24.772198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.429 qpair failed and we were unable to recover it. 00:29:29.429 [2024-07-26 11:37:24.772434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.429 [2024-07-26 11:37:24.772463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.429 qpair failed and we were unable to recover it. 00:29:29.429 [2024-07-26 11:37:24.772701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.429 [2024-07-26 11:37:24.772766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.429 qpair failed and we were unable to recover it. 00:29:29.429 [2024-07-26 11:37:24.773021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.429 [2024-07-26 11:37:24.773055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.429 qpair failed and we were unable to recover it. 00:29:29.429 [2024-07-26 11:37:24.773278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.429 [2024-07-26 11:37:24.773307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.429 qpair failed and we were unable to recover it. 00:29:29.429 [2024-07-26 11:37:24.773528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.429 [2024-07-26 11:37:24.773557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.429 qpair failed and we were unable to recover it. 00:29:29.429 [2024-07-26 11:37:24.773770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.429 [2024-07-26 11:37:24.773805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.429 qpair failed and we were unable to recover it. 
00:29:29.429 [2024-07-26 11:37:24.773973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.429 [2024-07-26 11:37:24.774001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.429 qpair failed and we were unable to recover it. 00:29:29.429 [2024-07-26 11:37:24.774186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.429 [2024-07-26 11:37:24.774249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.429 qpair failed and we were unable to recover it. 00:29:29.429 [2024-07-26 11:37:24.774506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.429 [2024-07-26 11:37:24.774535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.429 qpair failed and we were unable to recover it. 00:29:29.429 [2024-07-26 11:37:24.774714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.429 [2024-07-26 11:37:24.774742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.429 qpair failed and we were unable to recover it. 00:29:29.429 [2024-07-26 11:37:24.774952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.429 [2024-07-26 11:37:24.775016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.429 qpair failed and we were unable to recover it. 00:29:29.429 [2024-07-26 11:37:24.775311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.429 [2024-07-26 11:37:24.775376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.429 qpair failed and we were unable to recover it. 00:29:29.429 [2024-07-26 11:37:24.775663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.429 [2024-07-26 11:37:24.775692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.429 qpair failed and we were unable to recover it. 00:29:29.429 [2024-07-26 11:37:24.775874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.429 [2024-07-26 11:37:24.775938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.429 qpair failed and we were unable to recover it. 00:29:29.429 [2024-07-26 11:37:24.776224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.429 [2024-07-26 11:37:24.776259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.429 qpair failed and we were unable to recover it. 00:29:29.429 [2024-07-26 11:37:24.776441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.429 [2024-07-26 11:37:24.776470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.429 qpair failed and we were unable to recover it. 
00:29:29.429 [2024-07-26 11:37:24.776653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.429 [2024-07-26 11:37:24.776722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.429 qpair failed and we were unable to recover it. 00:29:29.429 [2024-07-26 11:37:24.777000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.429 [2024-07-26 11:37:24.777034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.429 qpair failed and we were unable to recover it. 00:29:29.429 [2024-07-26 11:37:24.777249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.429 [2024-07-26 11:37:24.777278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.429 qpair failed and we were unable to recover it. 00:29:29.429 [2024-07-26 11:37:24.777478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.429 [2024-07-26 11:37:24.777528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.429 qpair failed and we were unable to recover it. 00:29:29.429 [2024-07-26 11:37:24.777688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.429 [2024-07-26 11:37:24.777735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.429 qpair failed and we were unable to recover it. 00:29:29.429 [2024-07-26 11:37:24.777954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.429 [2024-07-26 11:37:24.777983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.429 qpair failed and we were unable to recover it. 00:29:29.429 [2024-07-26 11:37:24.778175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.429 [2024-07-26 11:37:24.778239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.429 qpair failed and we were unable to recover it. 00:29:29.429 [2024-07-26 11:37:24.778530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.429 [2024-07-26 11:37:24.778559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.429 qpair failed and we were unable to recover it. 00:29:29.429 [2024-07-26 11:37:24.778740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.429 [2024-07-26 11:37:24.778769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.429 qpair failed and we were unable to recover it. 00:29:29.429 [2024-07-26 11:37:24.779005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.429 [2024-07-26 11:37:24.779070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.429 qpair failed and we were unable to recover it. 
00:29:29.429 [2024-07-26 11:37:24.779335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.429 [2024-07-26 11:37:24.779369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.429 qpair failed and we were unable to recover it. 00:29:29.429 [2024-07-26 11:37:24.779602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.429 [2024-07-26 11:37:24.779632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.429 qpair failed and we were unable to recover it. 00:29:29.429 [2024-07-26 11:37:24.779859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.429 [2024-07-26 11:37:24.779923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.429 qpair failed and we were unable to recover it. 00:29:29.429 [2024-07-26 11:37:24.782088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.429 [2024-07-26 11:37:24.782175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.429 qpair failed and we were unable to recover it. 00:29:29.429 [2024-07-26 11:37:24.782447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.429 [2024-07-26 11:37:24.782478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.429 qpair failed and we were unable to recover it. 00:29:29.429 [2024-07-26 11:37:24.782648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.429 [2024-07-26 11:37:24.782709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.429 qpair failed and we were unable to recover it. 00:29:29.429 [2024-07-26 11:37:24.782952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.429 [2024-07-26 11:37:24.782988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.429 qpair failed and we were unable to recover it. 00:29:29.429 [2024-07-26 11:37:24.783167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.429 [2024-07-26 11:37:24.783196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.429 qpair failed and we were unable to recover it. 00:29:29.429 [2024-07-26 11:37:24.783341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.429 [2024-07-26 11:37:24.783408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.429 qpair failed and we were unable to recover it. 00:29:29.429 [2024-07-26 11:37:24.783673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.429 [2024-07-26 11:37:24.783703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.430 qpair failed and we were unable to recover it. 
00:29:29.430 [2024-07-26 11:37:24.783895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.430 [2024-07-26 11:37:24.783925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.430 qpair failed and we were unable to recover it. 00:29:29.430 [2024-07-26 11:37:24.784110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.430 [2024-07-26 11:37:24.784174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.430 qpair failed and we were unable to recover it. 00:29:29.430 [2024-07-26 11:37:24.784445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.430 [2024-07-26 11:37:24.784500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.430 qpair failed and we were unable to recover it. 00:29:29.430 [2024-07-26 11:37:24.784709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.430 [2024-07-26 11:37:24.784738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.430 qpair failed and we were unable to recover it. 00:29:29.430 [2024-07-26 11:37:24.784977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.430 [2024-07-26 11:37:24.785042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.430 qpair failed and we were unable to recover it. 00:29:29.430 [2024-07-26 11:37:24.785305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.430 [2024-07-26 11:37:24.785341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.430 qpair failed and we were unable to recover it. 00:29:29.430 [2024-07-26 11:37:24.785535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.430 [2024-07-26 11:37:24.785564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.430 qpair failed and we were unable to recover it. 00:29:29.430 [2024-07-26 11:37:24.785738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.430 [2024-07-26 11:37:24.785814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.430 qpair failed and we were unable to recover it. 00:29:29.430 [2024-07-26 11:37:24.786095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.430 [2024-07-26 11:37:24.786129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.430 qpair failed and we were unable to recover it. 00:29:29.430 [2024-07-26 11:37:24.786350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.430 [2024-07-26 11:37:24.786379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.430 qpair failed and we were unable to recover it. 
00:29:29.430 [2024-07-26 11:37:24.786579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.430 [2024-07-26 11:37:24.786609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.430 qpair failed and we were unable to recover it. 00:29:29.430 [2024-07-26 11:37:24.786829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.430 [2024-07-26 11:37:24.786878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.430 qpair failed and we were unable to recover it. 00:29:29.430 [2024-07-26 11:37:24.787144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.430 [2024-07-26 11:37:24.787172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.430 qpair failed and we were unable to recover it. 00:29:29.430 [2024-07-26 11:37:24.787382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.430 [2024-07-26 11:37:24.787466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.430 qpair failed and we were unable to recover it. 00:29:29.430 [2024-07-26 11:37:24.787637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.430 [2024-07-26 11:37:24.787666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.430 qpair failed and we were unable to recover it. 00:29:29.430 [2024-07-26 11:37:24.787831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.430 [2024-07-26 11:37:24.787860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.430 qpair failed and we were unable to recover it. 00:29:29.430 [2024-07-26 11:37:24.788021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.430 [2024-07-26 11:37:24.788086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.430 qpair failed and we were unable to recover it. 00:29:29.430 [2024-07-26 11:37:24.788385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.430 [2024-07-26 11:37:24.788468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.430 qpair failed and we were unable to recover it. 00:29:29.430 [2024-07-26 11:37:24.788686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.430 [2024-07-26 11:37:24.788715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.430 qpair failed and we were unable to recover it. 00:29:29.430 [2024-07-26 11:37:24.788907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.430 [2024-07-26 11:37:24.788971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.430 qpair failed and we were unable to recover it. 
00:29:29.430 [2024-07-26 11:37:24.789263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.430 [2024-07-26 11:37:24.789299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.430 qpair failed and we were unable to recover it. 00:29:29.430 [2024-07-26 11:37:24.789480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.430 [2024-07-26 11:37:24.789510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.430 qpair failed and we were unable to recover it. 00:29:29.430 [2024-07-26 11:37:24.789702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.430 [2024-07-26 11:37:24.789765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.430 qpair failed and we were unable to recover it. 00:29:29.430 [2024-07-26 11:37:24.790024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.430 [2024-07-26 11:37:24.790058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.430 qpair failed and we were unable to recover it. 00:29:29.430 [2024-07-26 11:37:24.790267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.430 [2024-07-26 11:37:24.790302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.430 qpair failed and we were unable to recover it. 00:29:29.430 [2024-07-26 11:37:24.790526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.430 [2024-07-26 11:37:24.790555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.430 qpair failed and we were unable to recover it. 00:29:29.430 [2024-07-26 11:37:24.790762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.430 [2024-07-26 11:37:24.790797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.430 qpair failed and we were unable to recover it. 00:29:29.430 [2024-07-26 11:37:24.791029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.430 [2024-07-26 11:37:24.791058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.430 qpair failed and we were unable to recover it. 00:29:29.430 [2024-07-26 11:37:24.791281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.430 [2024-07-26 11:37:24.791345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.430 qpair failed and we were unable to recover it. 00:29:29.430 [2024-07-26 11:37:24.791645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.430 [2024-07-26 11:37:24.791674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.430 qpair failed and we were unable to recover it. 
00:29:29.430 [2024-07-26 11:37:24.791825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.430 [2024-07-26 11:37:24.791853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.430 qpair failed and we were unable to recover it. 00:29:29.430 [2024-07-26 11:37:24.792053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.430 [2024-07-26 11:37:24.792117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.430 qpair failed and we were unable to recover it. 00:29:29.430 [2024-07-26 11:37:24.792396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.430 [2024-07-26 11:37:24.792440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.430 qpair failed and we were unable to recover it. 00:29:29.430 [2024-07-26 11:37:24.792638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.430 [2024-07-26 11:37:24.792666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.430 qpair failed and we were unable to recover it. 00:29:29.430 [2024-07-26 11:37:24.792862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.430 [2024-07-26 11:37:24.792927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.430 qpair failed and we were unable to recover it. 00:29:29.430 [2024-07-26 11:37:24.793206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.430 [2024-07-26 11:37:24.793241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.430 qpair failed and we were unable to recover it. 00:29:29.430 [2024-07-26 11:37:24.793461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.430 [2024-07-26 11:37:24.793514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.430 qpair failed and we were unable to recover it. 00:29:29.431 [2024-07-26 11:37:24.793661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.431 [2024-07-26 11:37:24.793689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.431 qpair failed and we were unable to recover it. 00:29:29.431 [2024-07-26 11:37:24.793865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.431 [2024-07-26 11:37:24.793908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.431 qpair failed and we were unable to recover it. 00:29:29.431 [2024-07-26 11:37:24.794101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.431 [2024-07-26 11:37:24.794129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.431 qpair failed and we were unable to recover it. 
00:29:29.431 [2024-07-26 11:37:24.794332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.431 [2024-07-26 11:37:24.794397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.431 qpair failed and we were unable to recover it.
00:29:29.436 [... the same three-line error (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats continuously, every few hundred microseconds, from 2024-07-26 11:37:24.794668 through 11:37:24.847056 ...]
00:29:29.436 [2024-07-26 11:37:24.847243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.436 [2024-07-26 11:37:24.847307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.436 qpair failed and we were unable to recover it. 00:29:29.436 [2024-07-26 11:37:24.847558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.436 [2024-07-26 11:37:24.847587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.436 qpair failed and we were unable to recover it. 00:29:29.436 [2024-07-26 11:37:24.847782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.437 [2024-07-26 11:37:24.847811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.437 qpair failed and we were unable to recover it. 00:29:29.437 [2024-07-26 11:37:24.848052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.437 [2024-07-26 11:37:24.848116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.437 qpair failed and we were unable to recover it. 00:29:29.437 [2024-07-26 11:37:24.848377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.437 [2024-07-26 11:37:24.848501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.437 qpair failed and we were unable to recover it. 00:29:29.437 [2024-07-26 11:37:24.848643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.437 [2024-07-26 11:37:24.848672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.437 qpair failed and we were unable to recover it. 00:29:29.437 [2024-07-26 11:37:24.848903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.437 [2024-07-26 11:37:24.848967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.437 qpair failed and we were unable to recover it. 00:29:29.437 [2024-07-26 11:37:24.849244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.437 [2024-07-26 11:37:24.849279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.437 qpair failed and we were unable to recover it. 00:29:29.437 [2024-07-26 11:37:24.849484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.437 [2024-07-26 11:37:24.849513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.437 qpair failed and we were unable to recover it. 00:29:29.437 [2024-07-26 11:37:24.849658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.437 [2024-07-26 11:37:24.849728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.437 qpair failed and we were unable to recover it. 
00:29:29.437 [2024-07-26 11:37:24.849992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.437 [2024-07-26 11:37:24.850033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.437 qpair failed and we were unable to recover it. 00:29:29.437 [2024-07-26 11:37:24.850288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.437 [2024-07-26 11:37:24.850316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.437 qpair failed and we were unable to recover it. 00:29:29.437 [2024-07-26 11:37:24.850515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.437 [2024-07-26 11:37:24.850543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.437 qpair failed and we were unable to recover it. 00:29:29.437 [2024-07-26 11:37:24.850703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.437 [2024-07-26 11:37:24.850751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.437 qpair failed and we were unable to recover it. 00:29:29.437 [2024-07-26 11:37:24.850951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.437 [2024-07-26 11:37:24.850980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.437 qpair failed and we were unable to recover it. 00:29:29.437 [2024-07-26 11:37:24.851160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.437 [2024-07-26 11:37:24.851223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.437 qpair failed and we were unable to recover it. 00:29:29.437 [2024-07-26 11:37:24.851448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.437 [2024-07-26 11:37:24.851496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.437 qpair failed and we were unable to recover it. 00:29:29.437 [2024-07-26 11:37:24.851641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.437 [2024-07-26 11:37:24.851670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.437 qpair failed and we were unable to recover it. 00:29:29.437 [2024-07-26 11:37:24.851902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.437 [2024-07-26 11:37:24.851966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.437 qpair failed and we were unable to recover it. 00:29:29.437 [2024-07-26 11:37:24.852244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.437 [2024-07-26 11:37:24.852278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.437 qpair failed and we were unable to recover it. 
00:29:29.437 [2024-07-26 11:37:24.852509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.437 [2024-07-26 11:37:24.852539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.437 qpair failed and we were unable to recover it. 00:29:29.437 [2024-07-26 11:37:24.852706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.437 [2024-07-26 11:37:24.852770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.437 qpair failed and we were unable to recover it. 00:29:29.437 [2024-07-26 11:37:24.853032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.437 [2024-07-26 11:37:24.853066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.437 qpair failed and we were unable to recover it. 00:29:29.437 [2024-07-26 11:37:24.853239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.437 [2024-07-26 11:37:24.853267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.437 qpair failed and we were unable to recover it. 00:29:29.437 [2024-07-26 11:37:24.853502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.437 [2024-07-26 11:37:24.853531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.437 qpair failed and we were unable to recover it. 00:29:29.437 [2024-07-26 11:37:24.853700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.437 [2024-07-26 11:37:24.853729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.437 qpair failed and we were unable to recover it. 00:29:29.437 [2024-07-26 11:37:24.853918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.437 [2024-07-26 11:37:24.853946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.437 qpair failed and we were unable to recover it. 00:29:29.437 [2024-07-26 11:37:24.854153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.437 [2024-07-26 11:37:24.854217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.437 qpair failed and we were unable to recover it. 00:29:29.437 [2024-07-26 11:37:24.854501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.437 [2024-07-26 11:37:24.854531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.437 qpair failed and we were unable to recover it. 00:29:29.437 [2024-07-26 11:37:24.854694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.437 [2024-07-26 11:37:24.854722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.437 qpair failed and we were unable to recover it. 
00:29:29.437 [2024-07-26 11:37:24.854948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.437 [2024-07-26 11:37:24.855011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.437 qpair failed and we were unable to recover it. 00:29:29.437 [2024-07-26 11:37:24.855265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.437 [2024-07-26 11:37:24.855301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.437 qpair failed and we were unable to recover it. 00:29:29.437 [2024-07-26 11:37:24.855518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.437 [2024-07-26 11:37:24.855547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.437 qpair failed and we were unable to recover it. 00:29:29.437 [2024-07-26 11:37:24.855699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.437 [2024-07-26 11:37:24.855762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.437 qpair failed and we were unable to recover it. 00:29:29.438 [2024-07-26 11:37:24.856056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.438 [2024-07-26 11:37:24.856091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.438 qpair failed and we were unable to recover it. 00:29:29.438 [2024-07-26 11:37:24.856294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.438 [2024-07-26 11:37:24.856322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.438 qpair failed and we were unable to recover it. 00:29:29.438 [2024-07-26 11:37:24.856515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.438 [2024-07-26 11:37:24.856562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.438 qpair failed and we were unable to recover it. 00:29:29.438 [2024-07-26 11:37:24.856705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.438 [2024-07-26 11:37:24.856753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.438 qpair failed and we were unable to recover it. 00:29:29.438 [2024-07-26 11:37:24.856952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.438 [2024-07-26 11:37:24.856981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.438 qpair failed and we were unable to recover it. 00:29:29.438 [2024-07-26 11:37:24.857198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.438 [2024-07-26 11:37:24.857260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.438 qpair failed and we were unable to recover it. 
00:29:29.438 [2024-07-26 11:37:24.857523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.438 [2024-07-26 11:37:24.857552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.438 qpair failed and we were unable to recover it. 00:29:29.438 [2024-07-26 11:37:24.857712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.438 [2024-07-26 11:37:24.857741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.438 qpair failed and we were unable to recover it. 00:29:29.438 [2024-07-26 11:37:24.857904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.438 [2024-07-26 11:37:24.857967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.438 qpair failed and we were unable to recover it. 00:29:29.438 [2024-07-26 11:37:24.858245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.438 [2024-07-26 11:37:24.858281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.438 qpair failed and we were unable to recover it. 00:29:29.438 [2024-07-26 11:37:24.858517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.438 [2024-07-26 11:37:24.858546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.438 qpair failed and we were unable to recover it. 00:29:29.438 [2024-07-26 11:37:24.858768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.438 [2024-07-26 11:37:24.858831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.438 qpair failed and we were unable to recover it. 00:29:29.438 [2024-07-26 11:37:24.859101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.438 [2024-07-26 11:37:24.859136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.438 qpair failed and we were unable to recover it. 00:29:29.438 [2024-07-26 11:37:24.859334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.438 [2024-07-26 11:37:24.859362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.438 qpair failed and we were unable to recover it. 00:29:29.438 [2024-07-26 11:37:24.859521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.438 [2024-07-26 11:37:24.859550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.438 qpair failed and we were unable to recover it. 00:29:29.438 [2024-07-26 11:37:24.859711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.438 [2024-07-26 11:37:24.859747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.438 qpair failed and we were unable to recover it. 
00:29:29.438 [2024-07-26 11:37:24.859962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.438 [2024-07-26 11:37:24.859996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.438 qpair failed and we were unable to recover it. 00:29:29.438 [2024-07-26 11:37:24.860183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.438 [2024-07-26 11:37:24.860248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.438 qpair failed and we were unable to recover it. 00:29:29.438 [2024-07-26 11:37:24.860526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.438 [2024-07-26 11:37:24.860555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.438 qpair failed and we were unable to recover it. 00:29:29.438 [2024-07-26 11:37:24.860718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.438 [2024-07-26 11:37:24.860746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.438 qpair failed and we were unable to recover it. 00:29:29.438 [2024-07-26 11:37:24.860967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.438 [2024-07-26 11:37:24.861012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.438 qpair failed and we were unable to recover it. 00:29:29.438 [2024-07-26 11:37:24.861265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.438 [2024-07-26 11:37:24.861300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.438 qpair failed and we were unable to recover it. 00:29:29.438 [2024-07-26 11:37:24.861504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.438 [2024-07-26 11:37:24.861534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.438 qpair failed and we were unable to recover it. 00:29:29.438 [2024-07-26 11:37:24.861669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.438 [2024-07-26 11:37:24.861730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.438 qpair failed and we were unable to recover it. 00:29:29.438 [2024-07-26 11:37:24.861992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.438 [2024-07-26 11:37:24.862027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.438 qpair failed and we were unable to recover it. 00:29:29.438 [2024-07-26 11:37:24.862253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.438 [2024-07-26 11:37:24.862281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.438 qpair failed and we were unable to recover it. 
00:29:29.438 [2024-07-26 11:37:24.862488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.438 [2024-07-26 11:37:24.862548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.438 qpair failed and we were unable to recover it. 00:29:29.438 [2024-07-26 11:37:24.862685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.438 [2024-07-26 11:37:24.862713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.438 qpair failed and we were unable to recover it. 00:29:29.438 [2024-07-26 11:37:24.862904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.438 [2024-07-26 11:37:24.862933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.438 qpair failed and we were unable to recover it. 00:29:29.438 [2024-07-26 11:37:24.863118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.438 [2024-07-26 11:37:24.863182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.438 qpair failed and we were unable to recover it. 00:29:29.438 [2024-07-26 11:37:24.863481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.438 [2024-07-26 11:37:24.863528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.438 qpair failed and we were unable to recover it. 00:29:29.438 [2024-07-26 11:37:24.863663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.438 [2024-07-26 11:37:24.863698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.438 qpair failed and we were unable to recover it. 00:29:29.438 [2024-07-26 11:37:24.863929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.438 [2024-07-26 11:37:24.863993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.438 qpair failed and we were unable to recover it. 00:29:29.438 [2024-07-26 11:37:24.864232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.438 [2024-07-26 11:37:24.864268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.438 qpair failed and we were unable to recover it. 00:29:29.438 [2024-07-26 11:37:24.864467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.438 [2024-07-26 11:37:24.864496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.438 qpair failed and we were unable to recover it. 00:29:29.438 [2024-07-26 11:37:24.864639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.438 [2024-07-26 11:37:24.864667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.438 qpair failed and we were unable to recover it. 
00:29:29.438 [2024-07-26 11:37:24.864960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.438 [2024-07-26 11:37:24.864995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.438 qpair failed and we were unable to recover it. 00:29:29.438 [2024-07-26 11:37:24.865185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.439 [2024-07-26 11:37:24.865213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.439 qpair failed and we were unable to recover it. 00:29:29.439 [2024-07-26 11:37:24.865458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.439 [2024-07-26 11:37:24.865527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.439 qpair failed and we were unable to recover it. 00:29:29.439 [2024-07-26 11:37:24.865704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.439 [2024-07-26 11:37:24.865751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.439 qpair failed and we were unable to recover it. 00:29:29.439 [2024-07-26 11:37:24.865938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.439 [2024-07-26 11:37:24.865966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.439 qpair failed and we were unable to recover it. 00:29:29.439 [2024-07-26 11:37:24.866181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.439 [2024-07-26 11:37:24.866244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.439 qpair failed and we were unable to recover it. 00:29:29.439 [2024-07-26 11:37:24.866517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.439 [2024-07-26 11:37:24.866547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.439 qpair failed and we were unable to recover it. 00:29:29.439 [2024-07-26 11:37:24.866692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.439 [2024-07-26 11:37:24.866720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.439 qpair failed and we were unable to recover it. 00:29:29.439 [2024-07-26 11:37:24.866893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.439 [2024-07-26 11:37:24.866956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.439 qpair failed and we were unable to recover it. 00:29:29.439 [2024-07-26 11:37:24.867217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.439 [2024-07-26 11:37:24.867252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.439 qpair failed and we were unable to recover it. 
00:29:29.439 [2024-07-26 11:37:24.867443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.439 [2024-07-26 11:37:24.867471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.439 qpair failed and we were unable to recover it. 00:29:29.439 [2024-07-26 11:37:24.867634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.439 [2024-07-26 11:37:24.867662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.439 qpair failed and we were unable to recover it. 00:29:29.439 [2024-07-26 11:37:24.867963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.439 [2024-07-26 11:37:24.867999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.439 qpair failed and we were unable to recover it. 00:29:29.439 [2024-07-26 11:37:24.868193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.439 [2024-07-26 11:37:24.868222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.439 qpair failed and we were unable to recover it. 00:29:29.439 [2024-07-26 11:37:24.868415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.439 [2024-07-26 11:37:24.868503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.439 qpair failed and we were unable to recover it. 00:29:29.439 [2024-07-26 11:37:24.868645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.439 [2024-07-26 11:37:24.868674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.439 qpair failed and we were unable to recover it. 00:29:29.439 [2024-07-26 11:37:24.868808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.439 [2024-07-26 11:37:24.868836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.439 qpair failed and we were unable to recover it. 00:29:29.439 [2024-07-26 11:37:24.869038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.439 [2024-07-26 11:37:24.869101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.439 qpair failed and we were unable to recover it. 00:29:29.439 [2024-07-26 11:37:24.869379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.439 [2024-07-26 11:37:24.869413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.439 qpair failed and we were unable to recover it. 00:29:29.439 [2024-07-26 11:37:24.869602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.439 [2024-07-26 11:37:24.869631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.439 qpair failed and we were unable to recover it. 
00:29:29.439 [2024-07-26 11:37:24.869796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.439 [2024-07-26 11:37:24.869872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.439 qpair failed and we were unable to recover it. 00:29:29.439 [2024-07-26 11:37:24.870091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.439 [2024-07-26 11:37:24.870126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.439 qpair failed and we were unable to recover it. 00:29:29.439 [2024-07-26 11:37:24.870306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.439 [2024-07-26 11:37:24.870335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.439 qpair failed and we were unable to recover it. 00:29:29.439 [2024-07-26 11:37:24.870556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.439 [2024-07-26 11:37:24.870586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.439 qpair failed and we were unable to recover it. 00:29:29.439 [2024-07-26 11:37:24.870759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.439 [2024-07-26 11:37:24.870793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.439 qpair failed and we were unable to recover it. 00:29:29.439 [2024-07-26 11:37:24.870948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.439 [2024-07-26 11:37:24.870976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.439 qpair failed and we were unable to recover it. 00:29:29.439 [2024-07-26 11:37:24.871149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.439 [2024-07-26 11:37:24.871213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.439 qpair failed and we were unable to recover it. 00:29:29.439 [2024-07-26 11:37:24.871521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.439 [2024-07-26 11:37:24.871569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.439 qpair failed and we were unable to recover it. 00:29:29.439 [2024-07-26 11:37:24.871713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.439 [2024-07-26 11:37:24.871742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.439 qpair failed and we were unable to recover it. 00:29:29.439 [2024-07-26 11:37:24.871925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.439 [2024-07-26 11:37:24.871989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.439 qpair failed and we were unable to recover it. 
00:29:29.439 [2024-07-26 11:37:24.872230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.439 [2024-07-26 11:37:24.872265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.439 qpair failed and we were unable to recover it. 00:29:29.439 [2024-07-26 11:37:24.872479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.439 [2024-07-26 11:37:24.872508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.439 qpair failed and we were unable to recover it. 00:29:29.439 [2024-07-26 11:37:24.872661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.439 [2024-07-26 11:37:24.872727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.439 qpair failed and we were unable to recover it. 00:29:29.439 [2024-07-26 11:37:24.872956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.439 [2024-07-26 11:37:24.872991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.439 qpair failed and we were unable to recover it. 00:29:29.439 [2024-07-26 11:37:24.873217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.439 [2024-07-26 11:37:24.873245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.439 qpair failed and we were unable to recover it. 00:29:29.439 [2024-07-26 11:37:24.873488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.439 [2024-07-26 11:37:24.873549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.439 qpair failed and we were unable to recover it. 00:29:29.439 [2024-07-26 11:37:24.873698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.439 [2024-07-26 11:37:24.873746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.439 qpair failed and we were unable to recover it. 00:29:29.439 [2024-07-26 11:37:24.873963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.439 [2024-07-26 11:37:24.873991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.439 qpair failed and we were unable to recover it. 00:29:29.439 [2024-07-26 11:37:24.874197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.439 [2024-07-26 11:37:24.874260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.440 qpair failed and we were unable to recover it. 00:29:29.440 [2024-07-26 11:37:24.874509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.440 [2024-07-26 11:37:24.874538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.440 qpair failed and we were unable to recover it. 
00:29:29.440 [2024-07-26 11:37:24.874678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.440 [2024-07-26 11:37:24.874706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.440 qpair failed and we were unable to recover it. 00:29:29.440 [2024-07-26 11:37:24.874924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.440 [2024-07-26 11:37:24.874988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.440 qpair failed and we were unable to recover it. 00:29:29.440 [2024-07-26 11:37:24.875240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.440 [2024-07-26 11:37:24.875274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.440 qpair failed and we were unable to recover it. 00:29:29.440 [2024-07-26 11:37:24.875458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.440 [2024-07-26 11:37:24.875488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.440 qpair failed and we were unable to recover it. 00:29:29.440 [2024-07-26 11:37:24.875625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.440 [2024-07-26 11:37:24.875654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.440 qpair failed and we were unable to recover it. 00:29:29.440 [2024-07-26 11:37:24.875917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.440 [2024-07-26 11:37:24.875952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.440 qpair failed and we were unable to recover it. 00:29:29.440 [2024-07-26 11:37:24.876170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.440 [2024-07-26 11:37:24.876199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.440 qpair failed and we were unable to recover it. 00:29:29.440 [2024-07-26 11:37:24.876409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.440 [2024-07-26 11:37:24.876507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.440 qpair failed and we were unable to recover it. 00:29:29.440 [2024-07-26 11:37:24.876699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.440 [2024-07-26 11:37:24.876731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.440 qpair failed and we were unable to recover it. 00:29:29.440 [2024-07-26 11:37:24.876907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.440 [2024-07-26 11:37:24.876935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.440 qpair failed and we were unable to recover it. 
00:29:29.440 [2024-07-26 11:37:24.877157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.440 [2024-07-26 11:37:24.877222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.440 qpair failed and we were unable to recover it. 00:29:29.440 [2024-07-26 11:37:24.877509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.440 [2024-07-26 11:37:24.877538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.440 qpair failed and we were unable to recover it. 00:29:29.440 [2024-07-26 11:37:24.877690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.440 [2024-07-26 11:37:24.877719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.440 qpair failed and we were unable to recover it. 00:29:29.440 [2024-07-26 11:37:24.877936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.440 [2024-07-26 11:37:24.878000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.440 qpair failed and we were unable to recover it. 00:29:29.440 [2024-07-26 11:37:24.878281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.440 [2024-07-26 11:37:24.878316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.440 qpair failed and we were unable to recover it. 00:29:29.440 [2024-07-26 11:37:24.878483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.440 [2024-07-26 11:37:24.878512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.440 qpair failed and we were unable to recover it. 00:29:29.440 [2024-07-26 11:37:24.878701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.440 [2024-07-26 11:37:24.878765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.440 qpair failed and we were unable to recover it. 00:29:29.440 [2024-07-26 11:37:24.879030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.440 [2024-07-26 11:37:24.879065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.440 qpair failed and we were unable to recover it. 00:29:29.440 [2024-07-26 11:37:24.879269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.440 [2024-07-26 11:37:24.879297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.440 qpair failed and we were unable to recover it. 00:29:29.440 [2024-07-26 11:37:24.879510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.440 [2024-07-26 11:37:24.879539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.440 qpair failed and we were unable to recover it. 
00:29:29.440 [2024-07-26 11:37:24.879701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.440 [2024-07-26 11:37:24.879756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.440 qpair failed and we were unable to recover it.
00:29:29.440 [... previous three messages repeated for each reconnect attempt of tqpair=0x7f0cb4000b90 (addr=10.0.0.2, port=4420) through 2024-07-26 11:37:24.936 ...]
00:29:29.446 [2024-07-26 11:37:24.934408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.446 [2024-07-26 11:37:24.934453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.446 qpair failed and we were unable to recover it. 00:29:29.446 [2024-07-26 11:37:24.934657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.446 [2024-07-26 11:37:24.934707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.446 qpair failed and we were unable to recover it. 00:29:29.446 [2024-07-26 11:37:24.934982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.446 [2024-07-26 11:37:24.935017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.446 qpair failed and we were unable to recover it. 00:29:29.446 [2024-07-26 11:37:24.935212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.446 [2024-07-26 11:37:24.935240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.446 qpair failed and we were unable to recover it. 00:29:29.446 [2024-07-26 11:37:24.935457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.446 [2024-07-26 11:37:24.935524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.446 qpair failed and we were unable to recover it. 00:29:29.446 [2024-07-26 11:37:24.935677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.446 [2024-07-26 11:37:24.935706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.446 qpair failed and we were unable to recover it. 00:29:29.446 [2024-07-26 11:37:24.935945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.446 [2024-07-26 11:37:24.935974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.446 qpair failed and we were unable to recover it. 00:29:29.446 [2024-07-26 11:37:24.936167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.446 [2024-07-26 11:37:24.936231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.446 qpair failed and we were unable to recover it. 00:29:29.446 [2024-07-26 11:37:24.936500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.446 [2024-07-26 11:37:24.936529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.446 qpair failed and we were unable to recover it. 00:29:29.446 [2024-07-26 11:37:24.936715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.446 [2024-07-26 11:37:24.936747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.446 qpair failed and we were unable to recover it. 
00:29:29.446 [2024-07-26 11:37:24.936962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.446 [2024-07-26 11:37:24.937027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.446 qpair failed and we were unable to recover it. 00:29:29.446 [2024-07-26 11:37:24.937289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.446 [2024-07-26 11:37:24.937324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.446 qpair failed and we were unable to recover it. 00:29:29.446 [2024-07-26 11:37:24.937575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.446 [2024-07-26 11:37:24.937604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.446 qpair failed and we were unable to recover it. 00:29:29.446 [2024-07-26 11:37:24.937808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.446 [2024-07-26 11:37:24.937872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.446 qpair failed and we were unable to recover it. 00:29:29.446 [2024-07-26 11:37:24.938179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.446 [2024-07-26 11:37:24.938215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.446 qpair failed and we were unable to recover it. 00:29:29.446 [2024-07-26 11:37:24.938484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.446 [2024-07-26 11:37:24.938513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.446 qpair failed and we were unable to recover it. 00:29:29.446 [2024-07-26 11:37:24.938666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.446 [2024-07-26 11:37:24.938746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.446 qpair failed and we were unable to recover it. 00:29:29.446 [2024-07-26 11:37:24.939004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.446 [2024-07-26 11:37:24.939040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.446 qpair failed and we were unable to recover it. 00:29:29.446 [2024-07-26 11:37:24.939233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.446 [2024-07-26 11:37:24.939261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.446 qpair failed and we were unable to recover it. 00:29:29.446 [2024-07-26 11:37:24.939465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.446 [2024-07-26 11:37:24.939527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.446 qpair failed and we were unable to recover it. 
00:29:29.446 [2024-07-26 11:37:24.939730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.446 [2024-07-26 11:37:24.939765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.446 qpair failed and we were unable to recover it. 00:29:29.446 [2024-07-26 11:37:24.940002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.446 [2024-07-26 11:37:24.940031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.446 qpair failed and we were unable to recover it. 00:29:29.446 [2024-07-26 11:37:24.940215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.446 [2024-07-26 11:37:24.940280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.446 qpair failed and we were unable to recover it. 00:29:29.446 [2024-07-26 11:37:24.940543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.446 [2024-07-26 11:37:24.940573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.446 qpair failed and we were unable to recover it. 00:29:29.446 [2024-07-26 11:37:24.940785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.446 [2024-07-26 11:37:24.940813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.446 qpair failed and we were unable to recover it. 00:29:29.446 [2024-07-26 11:37:24.941088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.446 [2024-07-26 11:37:24.941152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.447 qpair failed and we were unable to recover it. 00:29:29.447 [2024-07-26 11:37:24.941474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-26 11:37:24.941525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.447 qpair failed and we were unable to recover it. 00:29:29.447 [2024-07-26 11:37:24.941667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-26 11:37:24.941699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.447 qpair failed and we were unable to recover it. 00:29:29.447 [2024-07-26 11:37:24.941844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-26 11:37:24.941909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.447 qpair failed and we were unable to recover it. 00:29:29.447 [2024-07-26 11:37:24.942192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-26 11:37:24.942227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.447 qpair failed and we were unable to recover it. 
00:29:29.447 [2024-07-26 11:37:24.942501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-26 11:37:24.942530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.447 qpair failed and we were unable to recover it. 00:29:29.447 [2024-07-26 11:37:24.942697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-26 11:37:24.942726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.447 qpair failed and we were unable to recover it. 00:29:29.447 [2024-07-26 11:37:24.942898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-26 11:37:24.942933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.447 qpair failed and we were unable to recover it. 00:29:29.447 [2024-07-26 11:37:24.943158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-26 11:37:24.943186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.447 qpair failed and we were unable to recover it. 00:29:29.447 [2024-07-26 11:37:24.943387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-26 11:37:24.943467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.447 qpair failed and we were unable to recover it. 00:29:29.447 [2024-07-26 11:37:24.943708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-26 11:37:24.943758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.447 qpair failed and we were unable to recover it. 00:29:29.447 [2024-07-26 11:37:24.944011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-26 11:37:24.944040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.447 qpair failed and we were unable to recover it. 00:29:29.447 [2024-07-26 11:37:24.944239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-26 11:37:24.944303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.447 qpair failed and we were unable to recover it. 00:29:29.447 [2024-07-26 11:37:24.944530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-26 11:37:24.944560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.447 qpair failed and we were unable to recover it. 00:29:29.447 [2024-07-26 11:37:24.944741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-26 11:37:24.944769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.447 qpair failed and we were unable to recover it. 
00:29:29.447 [2024-07-26 11:37:24.944937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-26 11:37:24.945000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.447 qpair failed and we were unable to recover it. 00:29:29.447 [2024-07-26 11:37:24.945298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-26 11:37:24.945333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.447 qpair failed and we were unable to recover it. 00:29:29.447 [2024-07-26 11:37:24.945505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-26 11:37:24.945535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.447 qpair failed and we were unable to recover it. 00:29:29.447 [2024-07-26 11:37:24.945747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-26 11:37:24.945813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.447 qpair failed and we were unable to recover it. 00:29:29.447 [2024-07-26 11:37:24.946067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-26 11:37:24.946102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.447 qpair failed and we were unable to recover it. 00:29:29.447 [2024-07-26 11:37:24.946300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-26 11:37:24.946329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.447 qpair failed and we were unable to recover it. 00:29:29.447 [2024-07-26 11:37:24.946543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-26 11:37:24.946573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.447 qpair failed and we were unable to recover it. 00:29:29.447 [2024-07-26 11:37:24.946729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-26 11:37:24.946776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.447 qpair failed and we were unable to recover it. 00:29:29.447 [2024-07-26 11:37:24.946969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-26 11:37:24.946998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.447 qpair failed and we were unable to recover it. 00:29:29.447 [2024-07-26 11:37:24.947192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-26 11:37:24.947257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.447 qpair failed and we were unable to recover it. 
00:29:29.447 [2024-07-26 11:37:24.947526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-26 11:37:24.947556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.447 qpair failed and we were unable to recover it. 00:29:29.447 [2024-07-26 11:37:24.947747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-26 11:37:24.947776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.447 qpair failed and we were unable to recover it. 00:29:29.447 [2024-07-26 11:37:24.947992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-26 11:37:24.948057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.447 qpair failed and we were unable to recover it. 00:29:29.447 [2024-07-26 11:37:24.948333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-26 11:37:24.948368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.447 qpair failed and we were unable to recover it. 00:29:29.447 [2024-07-26 11:37:24.948536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-26 11:37:24.948566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.447 qpair failed and we were unable to recover it. 00:29:29.447 [2024-07-26 11:37:24.948798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-26 11:37:24.948863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.447 qpair failed and we were unable to recover it. 00:29:29.447 [2024-07-26 11:37:24.949130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-26 11:37:24.949165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.447 qpair failed and we were unable to recover it. 00:29:29.447 [2024-07-26 11:37:24.949382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-26 11:37:24.949411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.447 qpair failed and we were unable to recover it. 00:29:29.447 [2024-07-26 11:37:24.949603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-26 11:37:24.949632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.447 qpair failed and we were unable to recover it. 00:29:29.447 [2024-07-26 11:37:24.949877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-26 11:37:24.949913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.447 qpair failed and we were unable to recover it. 
00:29:29.447 [2024-07-26 11:37:24.950117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-26 11:37:24.950145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.447 qpair failed and we were unable to recover it. 00:29:29.447 [2024-07-26 11:37:24.950363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-26 11:37:24.950445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.447 qpair failed and we were unable to recover it. 00:29:29.447 [2024-07-26 11:37:24.950670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-26 11:37:24.950698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.447 qpair failed and we were unable to recover it. 00:29:29.448 [2024-07-26 11:37:24.950909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.448 [2024-07-26 11:37:24.950938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.448 qpair failed and we were unable to recover it. 00:29:29.448 [2024-07-26 11:37:24.951167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.448 [2024-07-26 11:37:24.951231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.448 qpair failed and we were unable to recover it. 00:29:29.448 [2024-07-26 11:37:24.951513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.448 [2024-07-26 11:37:24.951542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.448 qpair failed and we were unable to recover it. 00:29:29.448 [2024-07-26 11:37:24.951719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.448 [2024-07-26 11:37:24.951748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.448 qpair failed and we were unable to recover it. 00:29:29.448 [2024-07-26 11:37:24.951990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.448 [2024-07-26 11:37:24.952063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.448 qpair failed and we were unable to recover it. 00:29:29.448 [2024-07-26 11:37:24.952316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.448 [2024-07-26 11:37:24.952351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.448 qpair failed and we were unable to recover it. 00:29:29.448 [2024-07-26 11:37:24.952584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.448 [2024-07-26 11:37:24.952614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.448 qpair failed and we were unable to recover it. 
00:29:29.448 [2024-07-26 11:37:24.952794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.448 [2024-07-26 11:37:24.952858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.448 qpair failed and we were unable to recover it. 00:29:29.448 [2024-07-26 11:37:24.953133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.448 [2024-07-26 11:37:24.953169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.448 qpair failed and we were unable to recover it. 00:29:29.448 [2024-07-26 11:37:24.953362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.448 [2024-07-26 11:37:24.953391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.448 qpair failed and we were unable to recover it. 00:29:29.448 [2024-07-26 11:37:24.953610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.448 [2024-07-26 11:37:24.953640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.448 qpair failed and we were unable to recover it. 00:29:29.448 [2024-07-26 11:37:24.953910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.448 [2024-07-26 11:37:24.953945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.448 qpair failed and we were unable to recover it. 00:29:29.448 [2024-07-26 11:37:24.954164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.448 [2024-07-26 11:37:24.954192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.448 qpair failed and we were unable to recover it. 00:29:29.448 [2024-07-26 11:37:24.954354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.448 [2024-07-26 11:37:24.954419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.448 qpair failed and we were unable to recover it. 00:29:29.448 [2024-07-26 11:37:24.954673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.448 [2024-07-26 11:37:24.954702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.448 qpair failed and we were unable to recover it. 00:29:29.448 [2024-07-26 11:37:24.954919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.448 [2024-07-26 11:37:24.954948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.448 qpair failed and we were unable to recover it. 00:29:29.448 [2024-07-26 11:37:24.955174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.448 [2024-07-26 11:37:24.955238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.448 qpair failed and we were unable to recover it. 
00:29:29.448 [2024-07-26 11:37:24.955487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.448 [2024-07-26 11:37:24.955516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.448 qpair failed and we were unable to recover it. 00:29:29.448 [2024-07-26 11:37:24.955704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.448 [2024-07-26 11:37:24.955733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.448 qpair failed and we were unable to recover it. 00:29:29.448 [2024-07-26 11:37:24.955923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.448 [2024-07-26 11:37:24.955988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.448 qpair failed and we were unable to recover it. 00:29:29.448 [2024-07-26 11:37:24.956212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.448 [2024-07-26 11:37:24.956246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.448 qpair failed and we were unable to recover it. 00:29:29.448 [2024-07-26 11:37:24.956451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.448 [2024-07-26 11:37:24.956481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.448 qpair failed and we were unable to recover it. 00:29:29.448 [2024-07-26 11:37:24.956661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.448 [2024-07-26 11:37:24.956720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.448 qpair failed and we were unable to recover it. 00:29:29.448 [2024-07-26 11:37:24.956978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.448 [2024-07-26 11:37:24.957013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.448 qpair failed and we were unable to recover it. 00:29:29.448 [2024-07-26 11:37:24.957201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.448 [2024-07-26 11:37:24.957229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.448 qpair failed and we were unable to recover it. 00:29:29.448 [2024-07-26 11:37:24.957457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.448 [2024-07-26 11:37:24.957514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.448 qpair failed and we were unable to recover it. 00:29:29.448 [2024-07-26 11:37:24.957670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.448 [2024-07-26 11:37:24.957699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.448 qpair failed and we were unable to recover it. 
00:29:29.448 [2024-07-26 11:37:24.957917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.448 [2024-07-26 11:37:24.957945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.448 qpair failed and we were unable to recover it. 00:29:29.448 [2024-07-26 11:37:24.958160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.448 [2024-07-26 11:37:24.958223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.448 qpair failed and we were unable to recover it. 00:29:29.448 [2024-07-26 11:37:24.958486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.448 [2024-07-26 11:37:24.958516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.448 qpair failed and we were unable to recover it. 00:29:29.448 [2024-07-26 11:37:24.958696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.448 [2024-07-26 11:37:24.958724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.448 qpair failed and we were unable to recover it. 00:29:29.448 [2024-07-26 11:37:24.958901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.448 [2024-07-26 11:37:24.958965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.448 qpair failed and we were unable to recover it. 00:29:29.448 [2024-07-26 11:37:24.959208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.448 [2024-07-26 11:37:24.959243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.448 qpair failed and we were unable to recover it. 00:29:29.448 [2024-07-26 11:37:24.959471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.448 [2024-07-26 11:37:24.959500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.448 qpair failed and we were unable to recover it. 00:29:29.448 [2024-07-26 11:37:24.959715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.448 [2024-07-26 11:37:24.959779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.448 qpair failed and we were unable to recover it. 00:29:29.448 [2024-07-26 11:37:24.960074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.448 [2024-07-26 11:37:24.960109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.448 qpair failed and we were unable to recover it. 00:29:29.448 [2024-07-26 11:37:24.960391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.448 [2024-07-26 11:37:24.960420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.448 qpair failed and we were unable to recover it. 
00:29:29.448 [2024-07-26 11:37:24.960651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-07-26 11:37:24.960707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-07-26 11:37:24.960947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-07-26 11:37:24.960982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-07-26 11:37:24.961198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-07-26 11:37:24.961226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-07-26 11:37:24.961424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-07-26 11:37:24.961511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-07-26 11:37:24.961665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-07-26 11:37:24.961693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-07-26 11:37:24.961873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-07-26 11:37:24.961901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-07-26 11:37:24.962128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-07-26 11:37:24.962192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-07-26 11:37:24.962452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-07-26 11:37:24.962506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-07-26 11:37:24.962714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-07-26 11:37:24.962742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-07-26 11:37:24.962978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-07-26 11:37:24.963042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 
00:29:29.449 [2024-07-26 11:37:24.963324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-07-26 11:37:24.963388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-07-26 11:37:24.963657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-07-26 11:37:24.963686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-07-26 11:37:24.963920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-07-26 11:37:24.963985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-07-26 11:37:24.964271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-07-26 11:37:24.964306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-07-26 11:37:24.964526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-07-26 11:37:24.964556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-07-26 11:37:24.964788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-07-26 11:37:24.964853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-07-26 11:37:24.965149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-07-26 11:37:24.965185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-07-26 11:37:24.965390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-07-26 11:37:24.965419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-07-26 11:37:24.965608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-07-26 11:37:24.965638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-07-26 11:37:24.965874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-07-26 11:37:24.965910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 
00:29:29.449 [2024-07-26 11:37:24.966100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-07-26 11:37:24.966128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-07-26 11:37:24.966353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-07-26 11:37:24.966419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-07-26 11:37:24.966683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-07-26 11:37:24.966712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-07-26 11:37:24.966921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-07-26 11:37:24.966950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-07-26 11:37:24.967157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-07-26 11:37:24.967223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-07-26 11:37:24.967516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-07-26 11:37:24.967546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-07-26 11:37:24.967731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-07-26 11:37:24.967759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-07-26 11:37:24.967967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-07-26 11:37:24.968031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-07-26 11:37:24.968300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-07-26 11:37:24.968334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-07-26 11:37:24.968564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-07-26 11:37:24.968594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 
00:29:29.449 [2024-07-26 11:37:24.968782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-07-26 11:37:24.968847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-07-26 11:37:24.969115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-07-26 11:37:24.969150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-07-26 11:37:24.969381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-07-26 11:37:24.969410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-07-26 11:37:24.969607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-07-26 11:37:24.969635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-07-26 11:37:24.969865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-07-26 11:37:24.969900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-07-26 11:37:24.970135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-07-26 11:37:24.970164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-07-26 11:37:24.970346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-07-26 11:37:24.970410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.449 [2024-07-26 11:37:24.970662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.449 [2024-07-26 11:37:24.970691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.449 qpair failed and we were unable to recover it. 00:29:29.450 [2024-07-26 11:37:24.970906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.450 [2024-07-26 11:37:24.970935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.450 qpair failed and we were unable to recover it. 00:29:29.450 [2024-07-26 11:37:24.971160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.450 [2024-07-26 11:37:24.971224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.450 qpair failed and we were unable to recover it. 
00:29:29.450 [2024-07-26 11:37:24.971508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.450 [2024-07-26 11:37:24.971537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.450 qpair failed and we were unable to recover it.
[The same connect()/qpair-failure triplet repeats verbatim for every retry logged from 11:37:24.971753 through 11:37:25.029150, with only the timestamps advancing; tqpair=0x7f0cb4000b90, addr=10.0.0.2, port=4420, and errno=111 are identical throughout.]
00:29:29.455 [2024-07-26 11:37:25.029464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.455 [2024-07-26 11:37:25.029519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.455 qpair failed and we were unable to recover it. 00:29:29.455 [2024-07-26 11:37:25.029749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.455 [2024-07-26 11:37:25.029778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.455 qpair failed and we were unable to recover it. 00:29:29.455 [2024-07-26 11:37:25.030043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.455 [2024-07-26 11:37:25.030108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.455 qpair failed and we were unable to recover it. 00:29:29.455 [2024-07-26 11:37:25.030417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.455 [2024-07-26 11:37:25.030502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.455 qpair failed and we were unable to recover it. 00:29:29.455 [2024-07-26 11:37:25.030731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.455 [2024-07-26 11:37:25.030759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.455 qpair failed and we were unable to recover it. 00:29:29.455 [2024-07-26 11:37:25.030967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.455 [2024-07-26 11:37:25.031031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.455 qpair failed and we were unable to recover it. 00:29:29.455 [2024-07-26 11:37:25.031310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.455 [2024-07-26 11:37:25.031345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.455 qpair failed and we were unable to recover it. 00:29:29.456 [2024-07-26 11:37:25.031575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.456 [2024-07-26 11:37:25.031604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.456 qpair failed and we were unable to recover it. 00:29:29.456 [2024-07-26 11:37:25.031822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.456 [2024-07-26 11:37:25.031887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.456 qpair failed and we were unable to recover it. 00:29:29.456 [2024-07-26 11:37:25.032170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.456 [2024-07-26 11:37:25.032205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.456 qpair failed and we were unable to recover it. 
00:29:29.456 [2024-07-26 11:37:25.032426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.456 [2024-07-26 11:37:25.032463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.456 qpair failed and we were unable to recover it. 00:29:29.456 [2024-07-26 11:37:25.032705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.456 [2024-07-26 11:37:25.032770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.456 qpair failed and we were unable to recover it. 00:29:29.456 [2024-07-26 11:37:25.033046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.456 [2024-07-26 11:37:25.033081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.456 qpair failed and we were unable to recover it. 00:29:29.456 [2024-07-26 11:37:25.033268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.456 [2024-07-26 11:37:25.033297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.456 qpair failed and we were unable to recover it. 00:29:29.456 [2024-07-26 11:37:25.033497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.456 [2024-07-26 11:37:25.033532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.456 qpair failed and we were unable to recover it. 00:29:29.456 [2024-07-26 11:37:25.033757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.456 [2024-07-26 11:37:25.033792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.456 qpair failed and we were unable to recover it. 00:29:29.456 [2024-07-26 11:37:25.033990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.456 [2024-07-26 11:37:25.034019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.456 qpair failed and we were unable to recover it. 00:29:29.456 [2024-07-26 11:37:25.034246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.456 [2024-07-26 11:37:25.034311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.456 qpair failed and we were unable to recover it. 00:29:29.456 [2024-07-26 11:37:25.034585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.456 [2024-07-26 11:37:25.034615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.456 qpair failed and we were unable to recover it. 00:29:29.456 [2024-07-26 11:37:25.034808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.456 [2024-07-26 11:37:25.034837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.456 qpair failed and we were unable to recover it. 
00:29:29.456 [2024-07-26 11:37:25.035080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.456 [2024-07-26 11:37:25.035145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.456 qpair failed and we were unable to recover it. 00:29:29.456 [2024-07-26 11:37:25.035404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.456 [2024-07-26 11:37:25.035448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.456 qpair failed and we were unable to recover it. 00:29:29.456 [2024-07-26 11:37:25.035674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.456 [2024-07-26 11:37:25.035703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.456 qpair failed and we were unable to recover it. 00:29:29.456 [2024-07-26 11:37:25.035898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.456 [2024-07-26 11:37:25.035964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.456 qpair failed and we were unable to recover it. 00:29:29.456 [2024-07-26 11:37:25.036247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.456 [2024-07-26 11:37:25.036282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.456 qpair failed and we were unable to recover it. 00:29:29.456 [2024-07-26 11:37:25.036512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.456 [2024-07-26 11:37:25.036542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.456 qpair failed and we were unable to recover it. 00:29:29.456 [2024-07-26 11:37:25.036734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.456 [2024-07-26 11:37:25.036800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.456 qpair failed and we were unable to recover it. 00:29:29.456 [2024-07-26 11:37:25.037081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.456 [2024-07-26 11:37:25.037116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.456 qpair failed and we were unable to recover it. 00:29:29.456 [2024-07-26 11:37:25.037320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.456 [2024-07-26 11:37:25.037349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.456 qpair failed and we were unable to recover it. 00:29:29.456 [2024-07-26 11:37:25.037589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.456 [2024-07-26 11:37:25.037618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.456 qpair failed and we were unable to recover it. 
00:29:29.456 [2024-07-26 11:37:25.037802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.456 [2024-07-26 11:37:25.037837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.456 qpair failed and we were unable to recover it. 00:29:29.456 [2024-07-26 11:37:25.038013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.456 [2024-07-26 11:37:25.038043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.456 qpair failed and we were unable to recover it. 00:29:29.456 [2024-07-26 11:37:25.038262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.456 [2024-07-26 11:37:25.038326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.456 qpair failed and we were unable to recover it. 00:29:29.456 [2024-07-26 11:37:25.038586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.456 [2024-07-26 11:37:25.038615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.456 qpair failed and we were unable to recover it. 00:29:29.456 [2024-07-26 11:37:25.038820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.456 [2024-07-26 11:37:25.038849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.456 qpair failed and we were unable to recover it. 00:29:29.456 [2024-07-26 11:37:25.039045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.456 [2024-07-26 11:37:25.039109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.456 qpair failed and we were unable to recover it. 00:29:29.456 [2024-07-26 11:37:25.039414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.456 [2024-07-26 11:37:25.039501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.456 qpair failed and we were unable to recover it. 00:29:29.456 [2024-07-26 11:37:25.039701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.456 [2024-07-26 11:37:25.039734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.456 qpair failed and we were unable to recover it. 00:29:29.456 [2024-07-26 11:37:25.039958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.456 [2024-07-26 11:37:25.040022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.456 qpair failed and we were unable to recover it. 00:29:29.457 [2024-07-26 11:37:25.040335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.457 [2024-07-26 11:37:25.040399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.457 qpair failed and we were unable to recover it. 
00:29:29.457 [2024-07-26 11:37:25.040652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.457 [2024-07-26 11:37:25.040681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.457 qpair failed and we were unable to recover it. 00:29:29.457 [2024-07-26 11:37:25.040869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.457 [2024-07-26 11:37:25.040934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.457 qpair failed and we were unable to recover it. 00:29:29.457 [2024-07-26 11:37:25.041187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.457 [2024-07-26 11:37:25.041222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.457 qpair failed and we were unable to recover it. 00:29:29.457 [2024-07-26 11:37:25.041453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.457 [2024-07-26 11:37:25.041483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.457 qpair failed and we were unable to recover it. 00:29:29.457 [2024-07-26 11:37:25.041708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.457 [2024-07-26 11:37:25.041776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.457 qpair failed and we were unable to recover it. 00:29:29.457 [2024-07-26 11:37:25.042059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.457 [2024-07-26 11:37:25.042094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.457 qpair failed and we were unable to recover it. 00:29:29.457 [2024-07-26 11:37:25.042312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.457 [2024-07-26 11:37:25.042342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.457 qpair failed and we were unable to recover it. 00:29:29.457 [2024-07-26 11:37:25.042554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.457 [2024-07-26 11:37:25.042583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.457 qpair failed and we were unable to recover it. 00:29:29.457 [2024-07-26 11:37:25.042748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.457 [2024-07-26 11:37:25.042784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.457 qpair failed and we were unable to recover it. 00:29:29.457 [2024-07-26 11:37:25.043004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.457 [2024-07-26 11:37:25.043033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.457 qpair failed and we were unable to recover it. 
00:29:29.457 [2024-07-26 11:37:25.043190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.457 [2024-07-26 11:37:25.043254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.457 qpair failed and we were unable to recover it. 00:29:29.457 [2024-07-26 11:37:25.043555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.457 [2024-07-26 11:37:25.043585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.457 qpair failed and we were unable to recover it. 00:29:29.457 [2024-07-26 11:37:25.043788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.457 [2024-07-26 11:37:25.043817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.457 qpair failed and we were unable to recover it. 00:29:29.457 [2024-07-26 11:37:25.044034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.457 [2024-07-26 11:37:25.044098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.457 qpair failed and we were unable to recover it. 00:29:29.457 [2024-07-26 11:37:25.044382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.457 [2024-07-26 11:37:25.044417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.457 qpair failed and we were unable to recover it. 00:29:29.457 [2024-07-26 11:37:25.044595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.457 [2024-07-26 11:37:25.044624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.457 qpair failed and we were unable to recover it. 00:29:29.457 [2024-07-26 11:37:25.044847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.457 [2024-07-26 11:37:25.044911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.457 qpair failed and we were unable to recover it. 00:29:29.457 [2024-07-26 11:37:25.045192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.457 [2024-07-26 11:37:25.045227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.457 qpair failed and we were unable to recover it. 00:29:29.457 [2024-07-26 11:37:25.045448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.457 [2024-07-26 11:37:25.045477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.457 qpair failed and we were unable to recover it. 00:29:29.457 [2024-07-26 11:37:25.045649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.457 [2024-07-26 11:37:25.045707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.457 qpair failed and we were unable to recover it. 
00:29:29.457 [2024-07-26 11:37:25.045995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.457 [2024-07-26 11:37:25.046030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.457 qpair failed and we were unable to recover it. 00:29:29.457 [2024-07-26 11:37:25.046282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.457 [2024-07-26 11:37:25.046310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.457 qpair failed and we were unable to recover it. 00:29:29.457 [2024-07-26 11:37:25.046526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.457 [2024-07-26 11:37:25.046556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.457 qpair failed and we were unable to recover it. 00:29:29.457 [2024-07-26 11:37:25.046745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.457 [2024-07-26 11:37:25.046783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.457 qpair failed and we were unable to recover it. 00:29:29.457 [2024-07-26 11:37:25.047003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.457 [2024-07-26 11:37:25.047032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.457 qpair failed and we were unable to recover it. 00:29:29.457 [2024-07-26 11:37:25.047190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.457 [2024-07-26 11:37:25.047255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.457 qpair failed and we were unable to recover it. 00:29:29.457 [2024-07-26 11:37:25.047507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.457 [2024-07-26 11:37:25.047537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.457 qpair failed and we were unable to recover it. 00:29:29.457 [2024-07-26 11:37:25.047741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.457 [2024-07-26 11:37:25.047769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.457 qpair failed and we were unable to recover it. 00:29:29.457 [2024-07-26 11:37:25.047968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.457 [2024-07-26 11:37:25.048032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.457 qpair failed and we were unable to recover it. 00:29:29.457 [2024-07-26 11:37:25.048300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.457 [2024-07-26 11:37:25.048364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.457 qpair failed and we were unable to recover it. 
00:29:29.457 [2024-07-26 11:37:25.048650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.457 [2024-07-26 11:37:25.048679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.457 qpair failed and we were unable to recover it. 00:29:29.457 [2024-07-26 11:37:25.048891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.457 [2024-07-26 11:37:25.048956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.457 qpair failed and we were unable to recover it. 00:29:29.457 [2024-07-26 11:37:25.049242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.457 [2024-07-26 11:37:25.049277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.457 qpair failed and we were unable to recover it. 00:29:29.457 [2024-07-26 11:37:25.049540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.457 [2024-07-26 11:37:25.049570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.457 qpair failed and we were unable to recover it. 00:29:29.457 [2024-07-26 11:37:25.049771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.457 [2024-07-26 11:37:25.049835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.457 qpair failed and we were unable to recover it. 00:29:29.457 [2024-07-26 11:37:25.050129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.457 [2024-07-26 11:37:25.050164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.457 qpair failed and we were unable to recover it. 00:29:29.457 [2024-07-26 11:37:25.050409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-07-26 11:37:25.050445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-07-26 11:37:25.050652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-07-26 11:37:25.050730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-07-26 11:37:25.051005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-07-26 11:37:25.051040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-07-26 11:37:25.051255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-07-26 11:37:25.051284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 
00:29:29.458 [2024-07-26 11:37:25.051469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-07-26 11:37:25.051525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-07-26 11:37:25.051736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-07-26 11:37:25.051771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-07-26 11:37:25.052033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-07-26 11:37:25.052062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-07-26 11:37:25.052261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-07-26 11:37:25.052326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-07-26 11:37:25.052621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-07-26 11:37:25.052650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-07-26 11:37:25.052851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-07-26 11:37:25.052880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-07-26 11:37:25.053086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-07-26 11:37:25.053151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-07-26 11:37:25.053411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-07-26 11:37:25.053456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-07-26 11:37:25.053687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-07-26 11:37:25.053716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-07-26 11:37:25.053914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-07-26 11:37:25.053978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 
00:29:29.458 [2024-07-26 11:37:25.054261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-07-26 11:37:25.054296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-07-26 11:37:25.054523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-07-26 11:37:25.054553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-07-26 11:37:25.054771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-07-26 11:37:25.054836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-07-26 11:37:25.055087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-07-26 11:37:25.055122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-07-26 11:37:25.055334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-07-26 11:37:25.055362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-07-26 11:37:25.055582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-07-26 11:37:25.055611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-07-26 11:37:25.055800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-07-26 11:37:25.055835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-07-26 11:37:25.056050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-07-26 11:37:25.056079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-07-26 11:37:25.056234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-07-26 11:37:25.056298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-07-26 11:37:25.056574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-07-26 11:37:25.056603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 
00:29:29.458 [2024-07-26 11:37:25.056805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-07-26 11:37:25.056834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-07-26 11:37:25.057041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-07-26 11:37:25.057105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-07-26 11:37:25.057402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-07-26 11:37:25.057497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-07-26 11:37:25.057726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-07-26 11:37:25.057755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-07-26 11:37:25.058004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-07-26 11:37:25.058069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-07-26 11:37:25.058368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-07-26 11:37:25.058452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-07-26 11:37:25.058681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-07-26 11:37:25.058710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-07-26 11:37:25.058935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-07-26 11:37:25.059000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-07-26 11:37:25.059293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-07-26 11:37:25.059357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-07-26 11:37:25.059647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-07-26 11:37:25.059676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 
00:29:29.458 [2024-07-26 11:37:25.059884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-07-26 11:37:25.059948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-07-26 11:37:25.060239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-07-26 11:37:25.060275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-07-26 11:37:25.060549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-07-26 11:37:25.060578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.458 [2024-07-26 11:37:25.060800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.458 [2024-07-26 11:37:25.060865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.458 qpair failed and we were unable to recover it. 00:29:29.459 [2024-07-26 11:37:25.061139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-07-26 11:37:25.061175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-07-26 11:37:25.061424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-07-26 11:37:25.061460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-07-26 11:37:25.061686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-07-26 11:37:25.061750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-07-26 11:37:25.062039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-07-26 11:37:25.062076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-07-26 11:37:25.062320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-07-26 11:37:25.062349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-07-26 11:37:25.062541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-07-26 11:37:25.062571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 
00:29:29.459 [2024-07-26 11:37:25.062767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-07-26 11:37:25.062803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-07-26 11:37:25.063028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-07-26 11:37:25.063058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-07-26 11:37:25.063268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-07-26 11:37:25.063332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-07-26 11:37:25.063604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-07-26 11:37:25.063633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-07-26 11:37:25.063838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-07-26 11:37:25.063868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-07-26 11:37:25.064099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-07-26 11:37:25.064163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-07-26 11:37:25.064446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-07-26 11:37:25.064496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-07-26 11:37:25.064644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-07-26 11:37:25.064672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-07-26 11:37:25.064868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-07-26 11:37:25.064931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-07-26 11:37:25.065216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-07-26 11:37:25.065251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 
00:29:29.459 [2024-07-26 11:37:25.065451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-07-26 11:37:25.065481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-07-26 11:37:25.065711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-07-26 11:37:25.065776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-07-26 11:37:25.066059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-07-26 11:37:25.066093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-07-26 11:37:25.066305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-07-26 11:37:25.066334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-07-26 11:37:25.066569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-07-26 11:37:25.066599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-07-26 11:37:25.066821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-07-26 11:37:25.066856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-07-26 11:37:25.067147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-07-26 11:37:25.067176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-07-26 11:37:25.067392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-07-26 11:37:25.067451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-07-26 11:37:25.067653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-07-26 11:37:25.067681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-07-26 11:37:25.067912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-07-26 11:37:25.067940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 
00:29:29.459 [2024-07-26 11:37:25.068143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-07-26 11:37:25.068207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-07-26 11:37:25.068500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-07-26 11:37:25.068530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-07-26 11:37:25.068734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-07-26 11:37:25.068762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-07-26 11:37:25.069002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-07-26 11:37:25.069066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-07-26 11:37:25.069319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-07-26 11:37:25.069360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-07-26 11:37:25.069586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-07-26 11:37:25.069615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-07-26 11:37:25.069807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-07-26 11:37:25.069872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-07-26 11:37:25.070159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-07-26 11:37:25.070194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-07-26 11:37:25.070423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-07-26 11:37:25.070458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 00:29:29.459 [2024-07-26 11:37:25.070687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.459 [2024-07-26 11:37:25.070720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.459 qpair failed and we were unable to recover it. 
00:29:29.459 [2024-07-26 11:37:25.070896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.459 [2024-07-26 11:37:25.070931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.459 qpair failed and we were unable to recover it.
00:29:29.736 [2024-07-26 11:37:25.071130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.736 [2024-07-26 11:37:25.071159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.736 qpair failed and we were unable to recover it.
00:29:29.736 [2024-07-26 11:37:25.071382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.736 [2024-07-26 11:37:25.071462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.736 qpair failed and we were unable to recover it.
00:29:29.736 [2024-07-26 11:37:25.071707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.736 [2024-07-26 11:37:25.071735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.736 qpair failed and we were unable to recover it.
00:29:29.736 [2024-07-26 11:37:25.071953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.736 [2024-07-26 11:37:25.071981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.736 qpair failed and we were unable to recover it.
00:29:29.736 [2024-07-26 11:37:25.072214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.736 [2024-07-26 11:37:25.072248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.736 qpair failed and we were unable to recover it.
00:29:29.736 [2024-07-26 11:37:25.072450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.736 [2024-07-26 11:37:25.072496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.736 qpair failed and we were unable to recover it.
00:29:29.736 [2024-07-26 11:37:25.072702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.736 [2024-07-26 11:37:25.072730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.736 qpair failed and we were unable to recover it.
00:29:29.736 [2024-07-26 11:37:25.072891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.736 [2024-07-26 11:37:25.072925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.736 qpair failed and we were unable to recover it.
00:29:29.736 [2024-07-26 11:37:25.073097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.736 [2024-07-26 11:37:25.073130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.736 qpair failed and we were unable to recover it.
00:29:29.736 [2024-07-26 11:37:25.073339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.736 [2024-07-26 11:37:25.073368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.736 qpair failed and we were unable to recover it.
00:29:29.736 [2024-07-26 11:37:25.073572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.736 [2024-07-26 11:37:25.073601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.736 qpair failed and we were unable to recover it.
00:29:29.736 [2024-07-26 11:37:25.073811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.736 [2024-07-26 11:37:25.073845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.736 qpair failed and we were unable to recover it.
00:29:29.736 [2024-07-26 11:37:25.074061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.736 [2024-07-26 11:37:25.074089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.736 qpair failed and we were unable to recover it.
00:29:29.736 [2024-07-26 11:37:25.074310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.736 [2024-07-26 11:37:25.074343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.736 qpair failed and we were unable to recover it.
00:29:29.736 [2024-07-26 11:37:25.074591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.736 [2024-07-26 11:37:25.074619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.736 qpair failed and we were unable to recover it.
00:29:29.736 [2024-07-26 11:37:25.074806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.736 [2024-07-26 11:37:25.074835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.736 qpair failed and we were unable to recover it.
00:29:29.736 [2024-07-26 11:37:25.075059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.736 [2024-07-26 11:37:25.075092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.736 qpair failed and we were unable to recover it.
00:29:29.736 [2024-07-26 11:37:25.075317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.736 [2024-07-26 11:37:25.075352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.736 qpair failed and we were unable to recover it.
00:29:29.736 [2024-07-26 11:37:25.075558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.736 [2024-07-26 11:37:25.075587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.736 qpair failed and we were unable to recover it.
00:29:29.736 [2024-07-26 11:37:25.075814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.736 [2024-07-26 11:37:25.075847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.736 qpair failed and we were unable to recover it.
00:29:29.737 [2024-07-26 11:37:25.076034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.737 [2024-07-26 11:37:25.076070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.737 qpair failed and we were unable to recover it.
00:29:29.737 [2024-07-26 11:37:25.076294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.737 [2024-07-26 11:37:25.076323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.737 qpair failed and we were unable to recover it.
00:29:29.737 [2024-07-26 11:37:25.076515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.737 [2024-07-26 11:37:25.076554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.737 qpair failed and we were unable to recover it.
00:29:29.737 [2024-07-26 11:37:25.076707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.737 [2024-07-26 11:37:25.076753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.737 qpair failed and we were unable to recover it.
00:29:29.737 [2024-07-26 11:37:25.076982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.737 [2024-07-26 11:37:25.077010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.737 qpair failed and we were unable to recover it.
00:29:29.737 [2024-07-26 11:37:25.077229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.737 [2024-07-26 11:37:25.077293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.737 qpair failed and we were unable to recover it.
00:29:29.737 [2024-07-26 11:37:25.077566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.737 [2024-07-26 11:37:25.077595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.737 qpair failed and we were unable to recover it.
00:29:29.737 [2024-07-26 11:37:25.077735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.737 [2024-07-26 11:37:25.077763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.737 qpair failed and we were unable to recover it.
00:29:29.737 [2024-07-26 11:37:25.077922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.737 [2024-07-26 11:37:25.077986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.737 qpair failed and we were unable to recover it.
00:29:29.737 [2024-07-26 11:37:25.078268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.737 [2024-07-26 11:37:25.078303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.737 qpair failed and we were unable to recover it.
00:29:29.737 [2024-07-26 11:37:25.078523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.737 [2024-07-26 11:37:25.078551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.737 qpair failed and we were unable to recover it.
00:29:29.737 [2024-07-26 11:37:25.078760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.737 [2024-07-26 11:37:25.078824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.737 qpair failed and we were unable to recover it.
00:29:29.737 [2024-07-26 11:37:25.079076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.737 [2024-07-26 11:37:25.079111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.737 qpair failed and we were unable to recover it.
00:29:29.737 [2024-07-26 11:37:25.079311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.737 [2024-07-26 11:37:25.079344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.737 qpair failed and we were unable to recover it.
00:29:29.737 [2024-07-26 11:37:25.079538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.737 [2024-07-26 11:37:25.079567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.737 qpair failed and we were unable to recover it.
00:29:29.737 [2024-07-26 11:37:25.079742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.737 [2024-07-26 11:37:25.079777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.737 qpair failed and we were unable to recover it.
00:29:29.737 [2024-07-26 11:37:25.080011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.737 [2024-07-26 11:37:25.080039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.737 qpair failed and we were unable to recover it.
00:29:29.737 [2024-07-26 11:37:25.080274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.737 [2024-07-26 11:37:25.080339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.737 qpair failed and we were unable to recover it.
00:29:29.737 [2024-07-26 11:37:25.080638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.737 [2024-07-26 11:37:25.080667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.737 qpair failed and we were unable to recover it.
00:29:29.737 [2024-07-26 11:37:25.080805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.737 [2024-07-26 11:37:25.080833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.737 qpair failed and we were unable to recover it.
00:29:29.737 [2024-07-26 11:37:25.081072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.737 [2024-07-26 11:37:25.081137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.737 qpair failed and we were unable to recover it.
00:29:29.737 [2024-07-26 11:37:25.081420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.737 [2024-07-26 11:37:25.081490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.737 qpair failed and we were unable to recover it.
00:29:29.737 [2024-07-26 11:37:25.081716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.737 [2024-07-26 11:37:25.081745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.737 qpair failed and we were unable to recover it.
00:29:29.737 [2024-07-26 11:37:25.082033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.737 [2024-07-26 11:37:25.082097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.737 qpair failed and we were unable to recover it.
00:29:29.737 [2024-07-26 11:37:25.082374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.737 [2024-07-26 11:37:25.082408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.737 qpair failed and we were unable to recover it.
00:29:29.737 [2024-07-26 11:37:25.082678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.737 [2024-07-26 11:37:25.082707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.737 qpair failed and we were unable to recover it.
00:29:29.737 [2024-07-26 11:37:25.082921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.737 [2024-07-26 11:37:25.082986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.737 qpair failed and we were unable to recover it.
00:29:29.737 [2024-07-26 11:37:25.083283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.737 [2024-07-26 11:37:25.083318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.737 qpair failed and we were unable to recover it.
00:29:29.737 [2024-07-26 11:37:25.083539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.737 [2024-07-26 11:37:25.083568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.737 qpair failed and we were unable to recover it.
00:29:29.737 [2024-07-26 11:37:25.083792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.737 [2024-07-26 11:37:25.083855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.737 qpair failed and we were unable to recover it.
00:29:29.737 [2024-07-26 11:37:25.084111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.737 [2024-07-26 11:37:25.084146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.737 qpair failed and we were unable to recover it.
00:29:29.737 [2024-07-26 11:37:25.084362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.737 [2024-07-26 11:37:25.084391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.737 qpair failed and we were unable to recover it.
00:29:29.737 [2024-07-26 11:37:25.084606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.737 [2024-07-26 11:37:25.084635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.737 qpair failed and we were unable to recover it.
00:29:29.737 [2024-07-26 11:37:25.084917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.737 [2024-07-26 11:37:25.084952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.738 qpair failed and we were unable to recover it.
00:29:29.738 [2024-07-26 11:37:25.085222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.738 [2024-07-26 11:37:25.085250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.738 qpair failed and we were unable to recover it.
00:29:29.738 [2024-07-26 11:37:25.085469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.738 [2024-07-26 11:37:25.085522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.738 qpair failed and we were unable to recover it.
00:29:29.738 [2024-07-26 11:37:25.085719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.738 [2024-07-26 11:37:25.085755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.738 qpair failed and we were unable to recover it.
00:29:29.738 [2024-07-26 11:37:25.085997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.738 [2024-07-26 11:37:25.086026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.738 qpair failed and we were unable to recover it.
00:29:29.738 [2024-07-26 11:37:25.086222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.738 [2024-07-26 11:37:25.086255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.738 qpair failed and we were unable to recover it.
00:29:29.738 [2024-07-26 11:37:25.086489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.738 [2024-07-26 11:37:25.086542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.738 qpair failed and we were unable to recover it.
00:29:29.738 [2024-07-26 11:37:25.086748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.738 [2024-07-26 11:37:25.086777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.738 qpair failed and we were unable to recover it.
00:29:29.738 [2024-07-26 11:37:25.086959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.738 [2024-07-26 11:37:25.086988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.738 qpair failed and we were unable to recover it.
00:29:29.738 [2024-07-26 11:37:25.087195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.738 [2024-07-26 11:37:25.087229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.738 qpair failed and we were unable to recover it.
00:29:29.738 [2024-07-26 11:37:25.087403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.738 [2024-07-26 11:37:25.087441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.738 qpair failed and we were unable to recover it.
00:29:29.738 [2024-07-26 11:37:25.087609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.738 [2024-07-26 11:37:25.087637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.738 qpair failed and we were unable to recover it.
00:29:29.738 [2024-07-26 11:37:25.087817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.738 [2024-07-26 11:37:25.087853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.738 qpair failed and we were unable to recover it.
00:29:29.738 [2024-07-26 11:37:25.088044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.738 [2024-07-26 11:37:25.088072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.738 qpair failed and we were unable to recover it.
00:29:29.738 [2024-07-26 11:37:25.088259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.738 [2024-07-26 11:37:25.088322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.738 qpair failed and we were unable to recover it.
00:29:29.738 [2024-07-26 11:37:25.088628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.738 [2024-07-26 11:37:25.088658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.738 qpair failed and we were unable to recover it.
00:29:29.738 [2024-07-26 11:37:25.088837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.738 [2024-07-26 11:37:25.088866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.738 qpair failed and we were unable to recover it.
00:29:29.738 [2024-07-26 11:37:25.089083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.738 [2024-07-26 11:37:25.089147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.738 qpair failed and we were unable to recover it.
00:29:29.738 [2024-07-26 11:37:25.089475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.738 [2024-07-26 11:37:25.089505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.738 qpair failed and we were unable to recover it.
00:29:29.738 [2024-07-26 11:37:25.089713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.738 [2024-07-26 11:37:25.089742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.738 qpair failed and we were unable to recover it.
00:29:29.738 [2024-07-26 11:37:25.089969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.738 [2024-07-26 11:37:25.090044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.738 qpair failed and we were unable to recover it.
00:29:29.738 [2024-07-26 11:37:25.090345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.738 [2024-07-26 11:37:25.090380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.738 qpair failed and we were unable to recover it.
00:29:29.738 [2024-07-26 11:37:25.090659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.738 [2024-07-26 11:37:25.090688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.738 qpair failed and we were unable to recover it.
00:29:29.738 [2024-07-26 11:37:25.090905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.738 [2024-07-26 11:37:25.090969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.738 qpair failed and we were unable to recover it.
00:29:29.738 [2024-07-26 11:37:25.091239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.738 [2024-07-26 11:37:25.091273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.738 qpair failed and we were unable to recover it.
00:29:29.738 [2024-07-26 11:37:25.091500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.738 [2024-07-26 11:37:25.091530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.738 qpair failed and we were unable to recover it.
00:29:29.738 [2024-07-26 11:37:25.091658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.738 [2024-07-26 11:37:25.091723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.738 qpair failed and we were unable to recover it.
00:29:29.738 [2024-07-26 11:37:25.091990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.738 [2024-07-26 11:37:25.092025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.738 qpair failed and we were unable to recover it.
00:29:29.738 [2024-07-26 11:37:25.092221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.738 [2024-07-26 11:37:25.092250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.738 qpair failed and we were unable to recover it.
00:29:29.738 [2024-07-26 11:37:25.092476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.738 [2024-07-26 11:37:25.092527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.738 qpair failed and we were unable to recover it.
00:29:29.738 [2024-07-26 11:37:25.092714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.738 [2024-07-26 11:37:25.092765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.738 qpair failed and we were unable to recover it.
00:29:29.738 [2024-07-26 11:37:25.092964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.738 [2024-07-26 11:37:25.092992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.738 qpair failed and we were unable to recover it.
00:29:29.738 [2024-07-26 11:37:25.093224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.738 [2024-07-26 11:37:25.093289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.738 qpair failed and we were unable to recover it.
00:29:29.738 [2024-07-26 11:37:25.093567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.738 [2024-07-26 11:37:25.093596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.738 qpair failed and we were unable to recover it.
00:29:29.738 [2024-07-26 11:37:25.093815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.738 [2024-07-26 11:37:25.093844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.738 qpair failed and we were unable to recover it.
00:29:29.738 [2024-07-26 11:37:25.094109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.738 [2024-07-26 11:37:25.094173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.738 qpair failed and we were unable to recover it.
00:29:29.738 [2024-07-26 11:37:25.094453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.738 [2024-07-26 11:37:25.094500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.738 qpair failed and we were unable to recover it.
00:29:29.738 [2024-07-26 11:37:25.094724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.738 [2024-07-26 11:37:25.094753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.738 qpair failed and we were unable to recover it.
00:29:29.738 [2024-07-26 11:37:25.095035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.738 [2024-07-26 11:37:25.095100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.738 qpair failed and we were unable to recover it.
00:29:29.738 [2024-07-26 11:37:25.095387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.738 [2024-07-26 11:37:25.095415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.738 qpair failed and we were unable to recover it.
00:29:29.738 [2024-07-26 11:37:25.095597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.739 [2024-07-26 11:37:25.095626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.739 qpair failed and we were unable to recover it.
00:29:29.739 [2024-07-26 11:37:25.095818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.739 [2024-07-26 11:37:25.095882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.739 qpair failed and we were unable to recover it.
00:29:29.739 [2024-07-26 11:37:25.096160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.739 [2024-07-26 11:37:25.096195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.739 qpair failed and we were unable to recover it.
00:29:29.739 [2024-07-26 11:37:25.096434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.739 [2024-07-26 11:37:25.096463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.739 qpair failed and we were unable to recover it.
00:29:29.739 [2024-07-26 11:37:25.096640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.739 [2024-07-26 11:37:25.096700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.739 qpair failed and we were unable to recover it.
00:29:29.739 [2024-07-26 11:37:25.096983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.739 [2024-07-26 11:37:25.097018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.739 qpair failed and we were unable to recover it.
00:29:29.739 [2024-07-26 11:37:25.097227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.739 [2024-07-26 11:37:25.097255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.739 qpair failed and we were unable to recover it.
00:29:29.739 [2024-07-26 11:37:25.097484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.739 [2024-07-26 11:37:25.097513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.739 qpair failed and we were unable to recover it.
00:29:29.739 [2024-07-26 11:37:25.097746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.739 [2024-07-26 11:37:25.097781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.739 qpair failed and we were unable to recover it.
00:29:29.739 [2024-07-26 11:37:25.098067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.739 [2024-07-26 11:37:25.098096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.739 qpair failed and we were unable to recover it.
00:29:29.739 [2024-07-26 11:37:25.098309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.739 [2024-07-26 11:37:25.098373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.739 qpair failed and we were unable to recover it.
00:29:29.739 [2024-07-26 11:37:25.098665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.739 [2024-07-26 11:37:25.098694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.739 qpair failed and we were unable to recover it.
00:29:29.739 [2024-07-26 11:37:25.098901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.739 [2024-07-26 11:37:25.098929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.739 qpair failed and we were unable to recover it.
00:29:29.739 [2024-07-26 11:37:25.099117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.739 [2024-07-26 11:37:25.099181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.739 qpair failed and we were unable to recover it.
00:29:29.739 [2024-07-26 11:37:25.099467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.739 [2024-07-26 11:37:25.099511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.739 qpair failed and we were unable to recover it.
00:29:29.739 [2024-07-26 11:37:25.099714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.739 [2024-07-26 11:37:25.099743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.739 qpair failed and we were unable to recover it.
00:29:29.739 [2024-07-26 11:37:25.099957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.739 [2024-07-26 11:37:25.100021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.739 qpair failed and we were unable to recover it.
00:29:29.739 [2024-07-26 11:37:25.100301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.739 [2024-07-26 11:37:25.100336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.739 qpair failed and we were unable to recover it.
00:29:29.739 [2024-07-26 11:37:25.100536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.739 [2024-07-26 11:37:25.100564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.739 qpair failed and we were unable to recover it.
00:29:29.739 [2024-07-26 11:37:25.100791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.739 [2024-07-26 11:37:25.100855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.739 qpair failed and we were unable to recover it.
00:29:29.739 [2024-07-26 11:37:25.101133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.739 [2024-07-26 11:37:25.101174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.739 qpair failed and we were unable to recover it.
00:29:29.739 [2024-07-26 11:37:25.101443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.739 [2024-07-26 11:37:25.101472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.739 qpair failed and we were unable to recover it.
00:29:29.739 [2024-07-26 11:37:25.101704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.739 [2024-07-26 11:37:25.101768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.739 qpair failed and we were unable to recover it.
00:29:29.739 [2024-07-26 11:37:25.102051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.739 [2024-07-26 11:37:25.102086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.739 qpair failed and we were unable to recover it.
00:29:29.739 [2024-07-26 11:37:25.102312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.739 [2024-07-26 11:37:25.102341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.739 qpair failed and we were unable to recover it.
00:29:29.739 [2024-07-26 11:37:25.102539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.739 [2024-07-26 11:37:25.102568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.739 qpair failed and we were unable to recover it.
00:29:29.739 [2024-07-26 11:37:25.102722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.739 [2024-07-26 11:37:25.102769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.739 qpair failed and we were unable to recover it.
00:29:29.739 [2024-07-26 11:37:25.102982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.739 [2024-07-26 11:37:25.103010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.739 qpair failed and we were unable to recover it.
00:29:29.739 [2024-07-26 11:37:25.103193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.739 [2024-07-26 11:37:25.103256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.739 qpair failed and we were unable to recover it.
00:29:29.739 [2024-07-26 11:37:25.103554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.739 [2024-07-26 11:37:25.103583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.739 qpair failed and we were unable to recover it.
00:29:29.739 [2024-07-26 11:37:25.103799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.739 [2024-07-26 11:37:25.103827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.739 qpair failed and we were unable to recover it.
00:29:29.739 [2024-07-26 11:37:25.104101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.739 [2024-07-26 11:37:25.104164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.739 qpair failed and we were unable to recover it.
00:29:29.739 [2024-07-26 11:37:25.104464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.739 [2024-07-26 11:37:25.104519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.739 qpair failed and we were unable to recover it.
00:29:29.739 [2024-07-26 11:37:25.104713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.739 [2024-07-26 11:37:25.104741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.739 qpair failed and we were unable to recover it.
00:29:29.739 [2024-07-26 11:37:25.104968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.739 [2024-07-26 11:37:25.105032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.739 qpair failed and we were unable to recover it.
00:29:29.739 [2024-07-26 11:37:25.105298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.739 [2024-07-26 11:37:25.105361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.739 qpair failed and we were unable to recover it.
00:29:29.739 [2024-07-26 11:37:25.105642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.739 [2024-07-26 11:37:25.105671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.739 qpair failed and we were unable to recover it.
00:29:29.739 [2024-07-26 11:37:25.105885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.739 [2024-07-26 11:37:25.105949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.739 qpair failed and we were unable to recover it.
00:29:29.739 [2024-07-26 11:37:25.106175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.739 [2024-07-26 11:37:25.106210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.739 qpair failed and we were unable to recover it.
00:29:29.739 [2024-07-26 11:37:25.106416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.740 [2024-07-26 11:37:25.106488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.740 qpair failed and we were unable to recover it.
00:29:29.740 [2024-07-26 11:37:25.106698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.740 [2024-07-26 11:37:25.106773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.740 qpair failed and we were unable to recover it.
00:29:29.740 [2024-07-26 11:37:25.107052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.740 [2024-07-26 11:37:25.107087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.740 qpair failed and we were unable to recover it.
00:29:29.740 [2024-07-26 11:37:25.107312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.740 [2024-07-26 11:37:25.107340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.740 qpair failed and we were unable to recover it.
00:29:29.740 [2024-07-26 11:37:25.107572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.740 [2024-07-26 11:37:25.107608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.740 qpair failed and we were unable to recover it.
00:29:29.740 [2024-07-26 11:37:25.107807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.740 [2024-07-26 11:37:25.107842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.740 qpair failed and we were unable to recover it.
00:29:29.740 [2024-07-26 11:37:25.108054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.740 [2024-07-26 11:37:25.108083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.740 qpair failed and we were unable to recover it.
00:29:29.740 [2024-07-26 11:37:25.108299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.740 [2024-07-26 11:37:25.108363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.740 qpair failed and we were unable to recover it.
00:29:29.740 [2024-07-26 11:37:25.108681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.740 [2024-07-26 11:37:25.108710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.740 qpair failed and we were unable to recover it.
00:29:29.740 [2024-07-26 11:37:25.108983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.740 [2024-07-26 11:37:25.109011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.740 qpair failed and we were unable to recover it.
00:29:29.740 [2024-07-26 11:37:25.109216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.740 [2024-07-26 11:37:25.109281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.740 qpair failed and we were unable to recover it.
00:29:29.740 [2024-07-26 11:37:25.109567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.740 [2024-07-26 11:37:25.109596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.740 qpair failed and we were unable to recover it.
00:29:29.740 [2024-07-26 11:37:25.109785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.740 [2024-07-26 11:37:25.109814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.740 qpair failed and we were unable to recover it.
00:29:29.740 [2024-07-26 11:37:25.110025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.740 [2024-07-26 11:37:25.110089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.740 qpair failed and we were unable to recover it.
00:29:29.740 [2024-07-26 11:37:25.110368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.740 [2024-07-26 11:37:25.110403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.740 qpair failed and we were unable to recover it.
00:29:29.740 [2024-07-26 11:37:25.110613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.740 [2024-07-26 11:37:25.110641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.740 qpair failed and we were unable to recover it.
00:29:29.740 [2024-07-26 11:37:25.110829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.740 [2024-07-26 11:37:25.110893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.740 qpair failed and we were unable to recover it.
00:29:29.740 [2024-07-26 11:37:25.111140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.740 [2024-07-26 11:37:25.111175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.740 qpair failed and we were unable to recover it.
00:29:29.740 [2024-07-26 11:37:25.111392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.740 [2024-07-26 11:37:25.111421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.740 qpair failed and we were unable to recover it.
00:29:29.740 [2024-07-26 11:37:25.111620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.740 [2024-07-26 11:37:25.111649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.740 qpair failed and we were unable to recover it.
00:29:29.740 [2024-07-26 11:37:25.111877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.740 [2024-07-26 11:37:25.111912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.740 qpair failed and we were unable to recover it.
00:29:29.740 [2024-07-26 11:37:25.112124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.740 [2024-07-26 11:37:25.112157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.740 qpair failed and we were unable to recover it.
00:29:29.740 [2024-07-26 11:37:25.112317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.740 [2024-07-26 11:37:25.112381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.740 qpair failed and we were unable to recover it.
00:29:29.740 [2024-07-26 11:37:25.112659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.740 [2024-07-26 11:37:25.112689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.740 qpair failed and we were unable to recover it.
00:29:29.740 [2024-07-26 11:37:25.112858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.740 [2024-07-26 11:37:25.112887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.740 qpair failed and we were unable to recover it.
00:29:29.740 [2024-07-26 11:37:25.113088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.740 [2024-07-26 11:37:25.113152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.740 qpair failed and we were unable to recover it.
00:29:29.740 [2024-07-26 11:37:25.113440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.740 [2024-07-26 11:37:25.113476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.740 qpair failed and we were unable to recover it.
00:29:29.740 [2024-07-26 11:37:25.113711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.740 [2024-07-26 11:37:25.113740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.740 qpair failed and we were unable to recover it.
00:29:29.740 [2024-07-26 11:37:25.114026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.740 [2024-07-26 11:37:25.114090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.740 qpair failed and we were unable to recover it.
00:29:29.740 [2024-07-26 11:37:25.114413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.740 [2024-07-26 11:37:25.114500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.740 qpair failed and we were unable to recover it.
00:29:29.740 [2024-07-26 11:37:25.114739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.740 [2024-07-26 11:37:25.114767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.740 qpair failed and we were unable to recover it.
00:29:29.740 [2024-07-26 11:37:25.115035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.740 [2024-07-26 11:37:25.115098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.740 qpair failed and we were unable to recover it.
00:29:29.740 [2024-07-26 11:37:25.115397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.740 [2024-07-26 11:37:25.115479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.740 qpair failed and we were unable to recover it.
00:29:29.740 [2024-07-26 11:37:25.115693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.740 [2024-07-26 11:37:25.115722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.740 qpair failed and we were unable to recover it.
00:29:29.740 [2024-07-26 11:37:25.115977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.740 [2024-07-26 11:37:25.116040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.740 qpair failed and we were unable to recover it.
00:29:29.740 [2024-07-26 11:37:25.116343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.740 [2024-07-26 11:37:25.116407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.740 qpair failed and we were unable to recover it.
00:29:29.740 [2024-07-26 11:37:25.116605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.740 [2024-07-26 11:37:25.116633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.740 qpair failed and we were unable to recover it.
00:29:29.741 [2024-07-26 11:37:25.116779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.741 [2024-07-26 11:37:25.116843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.741 qpair failed and we were unable to recover it. 00:29:29.741 [2024-07-26 11:37:25.117087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.741 [2024-07-26 11:37:25.117122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.741 qpair failed and we were unable to recover it. 00:29:29.741 [2024-07-26 11:37:25.117341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.741 [2024-07-26 11:37:25.117369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.741 qpair failed and we were unable to recover it. 00:29:29.741 [2024-07-26 11:37:25.117552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.741 [2024-07-26 11:37:25.117581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.741 qpair failed and we were unable to recover it. 00:29:29.741 [2024-07-26 11:37:25.117800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.741 [2024-07-26 11:37:25.117835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.741 qpair failed and we were unable to recover it. 00:29:29.741 [2024-07-26 11:37:25.118073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.741 [2024-07-26 11:37:25.118102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.741 qpair failed and we were unable to recover it. 00:29:29.741 [2024-07-26 11:37:25.118337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.741 [2024-07-26 11:37:25.118400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.741 qpair failed and we were unable to recover it. 00:29:29.741 [2024-07-26 11:37:25.118632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.741 [2024-07-26 11:37:25.118660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.741 qpair failed and we were unable to recover it. 00:29:29.741 [2024-07-26 11:37:25.118819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.741 [2024-07-26 11:37:25.118847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.741 qpair failed and we were unable to recover it. 00:29:29.741 [2024-07-26 11:37:25.119026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.741 [2024-07-26 11:37:25.119089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.741 qpair failed and we were unable to recover it. 
00:29:29.741 [2024-07-26 11:37:25.119372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.741 [2024-07-26 11:37:25.119407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.741 qpair failed and we were unable to recover it. 00:29:29.741 [2024-07-26 11:37:25.119595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.741 [2024-07-26 11:37:25.119624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.741 qpair failed and we were unable to recover it. 00:29:29.741 [2024-07-26 11:37:25.119856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.741 [2024-07-26 11:37:25.119920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.741 qpair failed and we were unable to recover it. 00:29:29.741 [2024-07-26 11:37:25.120201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.741 [2024-07-26 11:37:25.120237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.741 qpair failed and we were unable to recover it. 00:29:29.741 [2024-07-26 11:37:25.120399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.741 [2024-07-26 11:37:25.120434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.741 qpair failed and we were unable to recover it. 00:29:29.741 [2024-07-26 11:37:25.120628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.741 [2024-07-26 11:37:25.120657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.741 qpair failed and we were unable to recover it. 00:29:29.741 [2024-07-26 11:37:25.120941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.741 [2024-07-26 11:37:25.120977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.741 qpair failed and we were unable to recover it. 00:29:29.741 [2024-07-26 11:37:25.121232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.741 [2024-07-26 11:37:25.121260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.741 qpair failed and we were unable to recover it. 00:29:29.741 [2024-07-26 11:37:25.121414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.741 [2024-07-26 11:37:25.121504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.741 qpair failed and we were unable to recover it. 00:29:29.741 [2024-07-26 11:37:25.121708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.741 [2024-07-26 11:37:25.121737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.741 qpair failed and we were unable to recover it. 
00:29:29.741 [2024-07-26 11:37:25.121912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.741 [2024-07-26 11:37:25.121941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.741 qpair failed and we were unable to recover it. 00:29:29.741 [2024-07-26 11:37:25.122130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.741 [2024-07-26 11:37:25.122164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.741 qpair failed and we were unable to recover it. 00:29:29.741 [2024-07-26 11:37:25.122331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.741 [2024-07-26 11:37:25.122366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.741 qpair failed and we were unable to recover it. 00:29:29.741 [2024-07-26 11:37:25.122591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.741 [2024-07-26 11:37:25.122620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.741 qpair failed and we were unable to recover it. 00:29:29.741 [2024-07-26 11:37:25.122803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.741 [2024-07-26 11:37:25.122877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.741 qpair failed and we were unable to recover it. 00:29:29.741 [2024-07-26 11:37:25.123125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.741 [2024-07-26 11:37:25.123158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.741 qpair failed and we were unable to recover it. 00:29:29.741 [2024-07-26 11:37:25.123351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.741 [2024-07-26 11:37:25.123380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.741 qpair failed and we were unable to recover it. 00:29:29.741 [2024-07-26 11:37:25.123597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.741 [2024-07-26 11:37:25.123627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.741 qpair failed and we were unable to recover it. 00:29:29.741 [2024-07-26 11:37:25.123842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.741 [2024-07-26 11:37:25.123876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.741 qpair failed and we were unable to recover it. 00:29:29.741 [2024-07-26 11:37:25.124083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.741 [2024-07-26 11:37:25.124112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.741 qpair failed and we were unable to recover it. 
00:29:29.741 [2024-07-26 11:37:25.124289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.741 [2024-07-26 11:37:25.124322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.741 qpair failed and we were unable to recover it. 00:29:29.742 [2024-07-26 11:37:25.124528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.742 [2024-07-26 11:37:25.124558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.742 qpair failed and we were unable to recover it. 00:29:29.742 [2024-07-26 11:37:25.124757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.742 [2024-07-26 11:37:25.124786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.742 qpair failed and we were unable to recover it. 00:29:29.742 [2024-07-26 11:37:25.125014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.742 [2024-07-26 11:37:25.125079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.742 qpair failed and we were unable to recover it. 00:29:29.742 [2024-07-26 11:37:25.125350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.742 [2024-07-26 11:37:25.125384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.742 qpair failed and we were unable to recover it. 00:29:29.742 [2024-07-26 11:37:25.125595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.742 [2024-07-26 11:37:25.125624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.742 qpair failed and we were unable to recover it. 00:29:29.742 [2024-07-26 11:37:25.125850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.742 [2024-07-26 11:37:25.125914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.742 qpair failed and we were unable to recover it. 00:29:29.742 [2024-07-26 11:37:25.126197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.742 [2024-07-26 11:37:25.126232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.742 qpair failed and we were unable to recover it. 00:29:29.742 [2024-07-26 11:37:25.126419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.742 [2024-07-26 11:37:25.126499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.742 qpair failed and we were unable to recover it. 00:29:29.742 [2024-07-26 11:37:25.126719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.742 [2024-07-26 11:37:25.126771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.742 qpair failed and we were unable to recover it. 
00:29:29.742 [2024-07-26 11:37:25.127050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.742 [2024-07-26 11:37:25.127085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.742 qpair failed and we were unable to recover it. 00:29:29.742 [2024-07-26 11:37:25.127321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.742 [2024-07-26 11:37:25.127350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.742 qpair failed and we were unable to recover it. 00:29:29.742 [2024-07-26 11:37:25.127586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.742 [2024-07-26 11:37:25.127615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.742 qpair failed and we were unable to recover it. 00:29:29.742 [2024-07-26 11:37:25.127827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.742 [2024-07-26 11:37:25.127862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.742 qpair failed and we were unable to recover it. 00:29:29.742 [2024-07-26 11:37:25.128116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.742 [2024-07-26 11:37:25.128145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.742 qpair failed and we were unable to recover it. 00:29:29.742 [2024-07-26 11:37:25.128344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.742 [2024-07-26 11:37:25.128408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.742 qpair failed and we were unable to recover it. 00:29:29.742 [2024-07-26 11:37:25.128651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.742 [2024-07-26 11:37:25.128679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.742 qpair failed and we were unable to recover it. 00:29:29.742 [2024-07-26 11:37:25.128884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.742 [2024-07-26 11:37:25.128912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.742 qpair failed and we were unable to recover it. 00:29:29.742 [2024-07-26 11:37:25.129136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.742 [2024-07-26 11:37:25.129200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.742 qpair failed and we were unable to recover it. 00:29:29.742 [2024-07-26 11:37:25.129496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.742 [2024-07-26 11:37:25.129525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.742 qpair failed and we were unable to recover it. 
00:29:29.742 [2024-07-26 11:37:25.129708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.742 [2024-07-26 11:37:25.129736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.742 qpair failed and we were unable to recover it. 00:29:29.742 [2024-07-26 11:37:25.129979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.742 [2024-07-26 11:37:25.130044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.742 qpair failed and we were unable to recover it. 00:29:29.742 [2024-07-26 11:37:25.130296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.742 [2024-07-26 11:37:25.130331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.742 qpair failed and we were unable to recover it. 00:29:29.742 [2024-07-26 11:37:25.130526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.742 [2024-07-26 11:37:25.130554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.742 qpair failed and we were unable to recover it. 00:29:29.742 [2024-07-26 11:37:25.130754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.742 [2024-07-26 11:37:25.130817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.742 qpair failed and we were unable to recover it. 00:29:29.742 [2024-07-26 11:37:25.131101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.742 [2024-07-26 11:37:25.131136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.742 qpair failed and we were unable to recover it. 00:29:29.742 [2024-07-26 11:37:25.131323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.742 [2024-07-26 11:37:25.131352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.742 qpair failed and we were unable to recover it. 00:29:29.742 [2024-07-26 11:37:25.131519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.742 [2024-07-26 11:37:25.131548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.742 qpair failed and we were unable to recover it. 00:29:29.742 [2024-07-26 11:37:25.131726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.742 [2024-07-26 11:37:25.131762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.742 qpair failed and we were unable to recover it. 00:29:29.742 [2024-07-26 11:37:25.131955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.742 [2024-07-26 11:37:25.131983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.742 qpair failed and we were unable to recover it. 
00:29:29.742 [2024-07-26 11:37:25.132206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.742 [2024-07-26 11:37:25.132270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.742 qpair failed and we were unable to recover it. 00:29:29.742 [2024-07-26 11:37:25.132524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.742 [2024-07-26 11:37:25.132553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.742 qpair failed and we were unable to recover it. 00:29:29.742 [2024-07-26 11:37:25.132729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.742 [2024-07-26 11:37:25.132757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.742 qpair failed and we were unable to recover it. 00:29:29.742 [2024-07-26 11:37:25.132986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.742 [2024-07-26 11:37:25.133049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.742 qpair failed and we were unable to recover it. 00:29:29.742 [2024-07-26 11:37:25.133353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.742 [2024-07-26 11:37:25.133426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.742 qpair failed and we were unable to recover it. 00:29:29.742 [2024-07-26 11:37:25.133667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.742 [2024-07-26 11:37:25.133696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.742 qpair failed and we were unable to recover it. 00:29:29.742 [2024-07-26 11:37:25.133888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.742 [2024-07-26 11:37:25.133953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.742 qpair failed and we were unable to recover it. 00:29:29.742 [2024-07-26 11:37:25.134244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.743 [2024-07-26 11:37:25.134279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.743 qpair failed and we were unable to recover it. 00:29:29.743 [2024-07-26 11:37:25.134459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.743 [2024-07-26 11:37:25.134489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.743 qpair failed and we were unable to recover it. 00:29:29.743 [2024-07-26 11:37:25.134664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.743 [2024-07-26 11:37:25.134693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.743 qpair failed and we were unable to recover it. 
00:29:29.743 [2024-07-26 11:37:25.134909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.743 [2024-07-26 11:37:25.134944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.743 qpair failed and we were unable to recover it. 00:29:29.743 [2024-07-26 11:37:25.135176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.743 [2024-07-26 11:37:25.135204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.743 qpair failed and we were unable to recover it. 00:29:29.743 [2024-07-26 11:37:25.135363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.743 [2024-07-26 11:37:25.135426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.743 qpair failed and we were unable to recover it. 00:29:29.743 [2024-07-26 11:37:25.135683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.743 [2024-07-26 11:37:25.135719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.743 qpair failed and we were unable to recover it. 00:29:29.743 [2024-07-26 11:37:25.135912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.743 [2024-07-26 11:37:25.135940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.743 qpair failed and we were unable to recover it. 00:29:29.743 [2024-07-26 11:37:25.136152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.743 [2024-07-26 11:37:25.136215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.743 qpair failed and we were unable to recover it. 00:29:29.743 [2024-07-26 11:37:25.136503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.743 [2024-07-26 11:37:25.136532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.743 qpair failed and we were unable to recover it. 00:29:29.743 [2024-07-26 11:37:25.136744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.743 [2024-07-26 11:37:25.136772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.743 qpair failed and we were unable to recover it. 00:29:29.743 [2024-07-26 11:37:25.137073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.743 [2024-07-26 11:37:25.137137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.743 qpair failed and we were unable to recover it. 00:29:29.743 [2024-07-26 11:37:25.137400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.743 [2024-07-26 11:37:25.137442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.743 qpair failed and we were unable to recover it. 
00:29:29.743 [2024-07-26 11:37:25.137682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.743 [2024-07-26 11:37:25.137711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.743 qpair failed and we were unable to recover it. 00:29:29.743 [2024-07-26 11:37:25.137995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.743 [2024-07-26 11:37:25.138058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.743 qpair failed and we were unable to recover it. 00:29:29.743 [2024-07-26 11:37:25.138316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.743 [2024-07-26 11:37:25.138351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.743 qpair failed and we were unable to recover it. 00:29:29.743 [2024-07-26 11:37:25.138540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.743 [2024-07-26 11:37:25.138569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.743 qpair failed and we were unable to recover it. 00:29:29.743 [2024-07-26 11:37:25.138783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.743 [2024-07-26 11:37:25.138847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.743 qpair failed and we were unable to recover it. 00:29:29.743 [2024-07-26 11:37:25.139139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.743 [2024-07-26 11:37:25.139174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.743 qpair failed and we were unable to recover it. 00:29:29.743 [2024-07-26 11:37:25.139396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.743 [2024-07-26 11:37:25.139425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.743 qpair failed and we were unable to recover it. 00:29:29.743 [2024-07-26 11:37:25.139656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.743 [2024-07-26 11:37:25.139727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.743 qpair failed and we were unable to recover it. 00:29:29.743 [2024-07-26 11:37:25.140006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.743 [2024-07-26 11:37:25.140041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.743 qpair failed and we were unable to recover it. 00:29:29.743 [2024-07-26 11:37:25.140223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.743 [2024-07-26 11:37:25.140252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.743 qpair failed and we were unable to recover it. 
00:29:29.743 [2024-07-26 11:37:25.140484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.743 [2024-07-26 11:37:25.140529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.743 qpair failed and we were unable to recover it. 00:29:29.743 [2024-07-26 11:37:25.140725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.743 [2024-07-26 11:37:25.140775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.743 qpair failed and we were unable to recover it. 00:29:29.743 [2024-07-26 11:37:25.141012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.743 [2024-07-26 11:37:25.141041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.743 qpair failed and we were unable to recover it. 00:29:29.743 [2024-07-26 11:37:25.141283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.743 [2024-07-26 11:37:25.141346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.743 qpair failed and we were unable to recover it. 00:29:29.743 [2024-07-26 11:37:25.141629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.743 [2024-07-26 11:37:25.141658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.743 qpair failed and we were unable to recover it. 00:29:29.743 [2024-07-26 11:37:25.141826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.743 [2024-07-26 11:37:25.141854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.743 qpair failed and we were unable to recover it. 00:29:29.743 [2024-07-26 11:37:25.142055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.743 [2024-07-26 11:37:25.142119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.743 qpair failed and we were unable to recover it. 00:29:29.743 [2024-07-26 11:37:25.142399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.743 [2024-07-26 11:37:25.142491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.743 qpair failed and we were unable to recover it. 00:29:29.743 [2024-07-26 11:37:25.142723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.743 [2024-07-26 11:37:25.142752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.743 qpair failed and we were unable to recover it. 00:29:29.743 [2024-07-26 11:37:25.143037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.743 [2024-07-26 11:37:25.143101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.743 qpair failed and we were unable to recover it. 
00:29:29.743 [2024-07-26 11:37:25.143384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.744 [2024-07-26 11:37:25.143475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.744 qpair failed and we were unable to recover it. 00:29:29.744 [2024-07-26 11:37:25.143660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.744 [2024-07-26 11:37:25.143689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.744 qpair failed and we were unable to recover it. 00:29:29.744 [2024-07-26 11:37:25.143888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.744 [2024-07-26 11:37:25.143952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.744 qpair failed and we were unable to recover it. 00:29:29.744 [2024-07-26 11:37:25.144202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.744 [2024-07-26 11:37:25.144236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.744 qpair failed and we were unable to recover it. 00:29:29.744 [2024-07-26 11:37:25.144434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.744 [2024-07-26 11:37:25.144468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.744 qpair failed and we were unable to recover it. 00:29:29.744 [2024-07-26 11:37:25.144657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.744 [2024-07-26 11:37:25.144721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.744 qpair failed and we were unable to recover it. 00:29:29.744 [2024-07-26 11:37:25.144989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.744 [2024-07-26 11:37:25.145024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.744 qpair failed and we were unable to recover it. 00:29:29.744 [2024-07-26 11:37:25.145212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.744 [2024-07-26 11:37:25.145240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.744 qpair failed and we were unable to recover it. 00:29:29.744 [2024-07-26 11:37:25.145446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.744 [2024-07-26 11:37:25.145519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.744 qpair failed and we were unable to recover it. 00:29:29.744 [2024-07-26 11:37:25.145749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.744 [2024-07-26 11:37:25.145784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.744 qpair failed and we were unable to recover it. 
00:29:29.744 [2024-07-26 11:37:25.146072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.744 [2024-07-26 11:37:25.146100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.744 qpair failed and we were unable to recover it. 00:29:29.744 [2024-07-26 11:37:25.146368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.744 [2024-07-26 11:37:25.146449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.744 qpair failed and we were unable to recover it. 00:29:29.744 [2024-07-26 11:37:25.146671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.744 [2024-07-26 11:37:25.146700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.744 qpair failed and we were unable to recover it. 00:29:29.744 [2024-07-26 11:37:25.146898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.744 [2024-07-26 11:37:25.146927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.744 qpair failed and we were unable to recover it. 00:29:29.744 [2024-07-26 11:37:25.147164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.744 [2024-07-26 11:37:25.147227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.744 qpair failed and we were unable to recover it. 00:29:29.744 [2024-07-26 11:37:25.147502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.744 [2024-07-26 11:37:25.147531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.744 qpair failed and we were unable to recover it. 00:29:29.744 [2024-07-26 11:37:25.147735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.744 [2024-07-26 11:37:25.147764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.744 qpair failed and we were unable to recover it. 00:29:29.744 [2024-07-26 11:37:25.147995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.744 [2024-07-26 11:37:25.148059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.744 qpair failed and we were unable to recover it. 00:29:29.744 [2024-07-26 11:37:25.148360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.744 [2024-07-26 11:37:25.148395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.744 qpair failed and we were unable to recover it. 00:29:29.744 [2024-07-26 11:37:25.148630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.744 [2024-07-26 11:37:25.148659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.744 qpair failed and we were unable to recover it. 
00:29:29.744 [2024-07-26 11:37:25.148857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.744 [2024-07-26 11:37:25.148922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.744 qpair failed and we were unable to recover it. 00:29:29.745 [2024-07-26 11:37:25.149211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.745 [2024-07-26 11:37:25.149246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.745 qpair failed and we were unable to recover it. 00:29:29.745 [2024-07-26 11:37:25.149446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.745 [2024-07-26 11:37:25.149475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.745 qpair failed and we were unable to recover it. 00:29:29.745 [2024-07-26 11:37:25.149657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.745 [2024-07-26 11:37:25.149730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.745 qpair failed and we were unable to recover it. 00:29:29.745 [2024-07-26 11:37:25.150023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.745 [2024-07-26 11:37:25.150058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.745 qpair failed and we were unable to recover it. 00:29:29.745 [2024-07-26 11:37:25.150311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.745 [2024-07-26 11:37:25.150340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.745 qpair failed and we were unable to recover it. 00:29:29.745 [2024-07-26 11:37:25.150539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.745 [2024-07-26 11:37:25.150568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.745 qpair failed and we were unable to recover it. 00:29:29.745 [2024-07-26 11:37:25.150788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.745 [2024-07-26 11:37:25.150824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.745 qpair failed and we were unable to recover it. 00:29:29.745 [2024-07-26 11:37:25.151074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.745 [2024-07-26 11:37:25.151102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.745 qpair failed and we were unable to recover it. 00:29:29.745 [2024-07-26 11:37:25.151301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.745 [2024-07-26 11:37:25.151365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.745 qpair failed and we were unable to recover it. 
00:29:29.745 [2024-07-26 11:37:25.151634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.745 [2024-07-26 11:37:25.151663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.745 qpair failed and we were unable to recover it. 00:29:29.745 [2024-07-26 11:37:25.151873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.745 [2024-07-26 11:37:25.151902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.745 qpair failed and we were unable to recover it. 00:29:29.745 [2024-07-26 11:37:25.152177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.745 [2024-07-26 11:37:25.152240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.745 qpair failed and we were unable to recover it. 00:29:29.745 [2024-07-26 11:37:25.152492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.745 [2024-07-26 11:37:25.152521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.745 qpair failed and we were unable to recover it. 00:29:29.745 [2024-07-26 11:37:25.152736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.745 [2024-07-26 11:37:25.152764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.745 qpair failed and we were unable to recover it. 00:29:29.746 [2024-07-26 11:37:25.153052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.746 [2024-07-26 11:37:25.153117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.746 qpair failed and we were unable to recover it. 00:29:29.746 [2024-07-26 11:37:25.153395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.746 [2024-07-26 11:37:25.153437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.746 qpair failed and we were unable to recover it. 00:29:29.746 [2024-07-26 11:37:25.153635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.746 [2024-07-26 11:37:25.153664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.746 qpair failed and we were unable to recover it. 00:29:29.746 [2024-07-26 11:37:25.153857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.746 [2024-07-26 11:37:25.153923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.746 qpair failed and we were unable to recover it. 00:29:29.746 [2024-07-26 11:37:25.154144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.746 [2024-07-26 11:37:25.154178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.746 qpair failed and we were unable to recover it. 
00:29:29.746 [2024-07-26 11:37:25.154392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.746 [2024-07-26 11:37:25.154421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.746 qpair failed and we were unable to recover it. 00:29:29.746 [2024-07-26 11:37:25.154656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.746 [2024-07-26 11:37:25.154706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.746 qpair failed and we were unable to recover it. 00:29:29.746 [2024-07-26 11:37:25.154988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.746 [2024-07-26 11:37:25.155023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.746 qpair failed and we were unable to recover it. 00:29:29.746 [2024-07-26 11:37:25.155240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.746 [2024-07-26 11:37:25.155269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.746 qpair failed and we were unable to recover it. 00:29:29.746 [2024-07-26 11:37:25.155506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.746 [2024-07-26 11:37:25.155567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.746 qpair failed and we were unable to recover it. 00:29:29.746 [2024-07-26 11:37:25.155769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.746 [2024-07-26 11:37:25.155804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.746 qpair failed and we were unable to recover it. 00:29:29.746 [2024-07-26 11:37:25.156052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.746 [2024-07-26 11:37:25.156081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.746 qpair failed and we were unable to recover it. 00:29:29.746 [2024-07-26 11:37:25.156280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.746 [2024-07-26 11:37:25.156344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.746 qpair failed and we were unable to recover it. 00:29:29.746 [2024-07-26 11:37:25.156645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.746 [2024-07-26 11:37:25.156674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.746 qpair failed and we were unable to recover it. 00:29:29.746 [2024-07-26 11:37:25.156920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.746 [2024-07-26 11:37:25.156949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.746 qpair failed and we were unable to recover it. 
00:29:29.746 [2024-07-26 11:37:25.157180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.747 [2024-07-26 11:37:25.157243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.747 qpair failed and we were unable to recover it.
00:29:29.747 [... last three messages repeated 208 more times for tqpair=0x7f0cb4000b90 (addr=10.0.0.2, port=4420, errno = 111) between 11:37:25.157517 and 11:37:25.213806 ...]
00:29:29.762 [2024-07-26 11:37:25.214043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.762 [2024-07-26 11:37:25.214072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:29.762 qpair failed and we were unable to recover it.
00:29:29.762 [2024-07-26 11:37:25.214294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-07-26 11:37:25.214358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 00:29:29.762 [2024-07-26 11:37:25.214625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-07-26 11:37:25.214654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 00:29:29.762 [2024-07-26 11:37:25.214791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-07-26 11:37:25.214820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 00:29:29.762 [2024-07-26 11:37:25.215023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-07-26 11:37:25.215086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 00:29:29.762 [2024-07-26 11:37:25.215337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-07-26 11:37:25.215371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 00:29:29.762 [2024-07-26 11:37:25.215626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-07-26 11:37:25.215656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 00:29:29.762 [2024-07-26 11:37:25.215868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-07-26 11:37:25.215933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 00:29:29.762 [2024-07-26 11:37:25.216214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-07-26 11:37:25.216248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 00:29:29.762 [2024-07-26 11:37:25.216476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-07-26 11:37:25.216506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 00:29:29.762 [2024-07-26 11:37:25.216760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-07-26 11:37:25.216825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 
00:29:29.762 [2024-07-26 11:37:25.217107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-07-26 11:37:25.217141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 00:29:29.762 [2024-07-26 11:37:25.217372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-07-26 11:37:25.217405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 00:29:29.762 [2024-07-26 11:37:25.217576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-07-26 11:37:25.217604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 00:29:29.762 [2024-07-26 11:37:25.217789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-07-26 11:37:25.217824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 00:29:29.762 [2024-07-26 11:37:25.218001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-07-26 11:37:25.218029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 00:29:29.762 [2024-07-26 11:37:25.218255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-07-26 11:37:25.218318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 00:29:29.762 [2024-07-26 11:37:25.218636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-07-26 11:37:25.218665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 00:29:29.762 [2024-07-26 11:37:25.218904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-07-26 11:37:25.218932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 00:29:29.762 [2024-07-26 11:37:25.219163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-07-26 11:37:25.219227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 00:29:29.762 [2024-07-26 11:37:25.219527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-07-26 11:37:25.219555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 
00:29:29.762 [2024-07-26 11:37:25.219695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-07-26 11:37:25.219724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 00:29:29.762 [2024-07-26 11:37:25.219927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-07-26 11:37:25.219991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.762 qpair failed and we were unable to recover it. 00:29:29.762 [2024-07-26 11:37:25.220242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.762 [2024-07-26 11:37:25.220277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.220489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.220519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.220707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.220778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.221073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.221108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.221277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.221306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.221516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.221545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.221767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.221802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.221962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.221990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 
00:29:29.763 [2024-07-26 11:37:25.222170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.222233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.222499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.222528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.222710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.222738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.222966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.223029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.223299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.223364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.223614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.223644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.223887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.223952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.224235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.224270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.224524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.224554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.224756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.224820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 
00:29:29.763 [2024-07-26 11:37:25.225103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.225138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.225361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.225390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.225607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.225636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.225883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.225918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.226122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.226150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.226346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.226411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.226656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.226684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.226885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.226913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.227149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.227213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.227502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.227542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 
00:29:29.763 [2024-07-26 11:37:25.227722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.227751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.227925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.227999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.228249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.228284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.228507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.228536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.228756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.228820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.229102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.229137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.229368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.229397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.229611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.229640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.229885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.229920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.230118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.230147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 
00:29:29.763 [2024-07-26 11:37:25.230340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.230405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.230681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.230710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.230997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.231025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.231281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.231344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.231637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.231666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.231848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.231877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.232098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.232163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.232446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.232497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.232681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.232710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.232926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.232990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 
00:29:29.763 [2024-07-26 11:37:25.233374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.233457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.233673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.233702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.233896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.763 [2024-07-26 11:37:25.233959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.763 qpair failed and we were unable to recover it. 00:29:29.763 [2024-07-26 11:37:25.234233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.234268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-07-26 11:37:25.234489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.234518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-07-26 11:37:25.234729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.234792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-07-26 11:37:25.235131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.235196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-07-26 11:37:25.235418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.235464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-07-26 11:37:25.235704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.235769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-07-26 11:37:25.236059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.236094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 
00:29:29.764 [2024-07-26 11:37:25.236311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.236339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-07-26 11:37:25.236532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.236561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-07-26 11:37:25.236766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.236801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-07-26 11:37:25.237109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.237154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-07-26 11:37:25.237456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.237523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-07-26 11:37:25.237741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.237776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-07-26 11:37:25.238004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.238032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-07-26 11:37:25.238255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.238319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-07-26 11:37:25.238614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.238643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-07-26 11:37:25.238904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.238983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 
00:29:29.764 [2024-07-26 11:37:25.239266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.239329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-07-26 11:37:25.239611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.239645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-07-26 11:37:25.239812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.239841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-07-26 11:37:25.240018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.240082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-07-26 11:37:25.240458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.240522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-07-26 11:37:25.240709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.240738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-07-26 11:37:25.240913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.240977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-07-26 11:37:25.241231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.241266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-07-26 11:37:25.241472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.241501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-07-26 11:37:25.241724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.241789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 
00:29:29.764 [2024-07-26 11:37:25.242069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.242104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-07-26 11:37:25.242377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.242406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-07-26 11:37:25.242571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.242600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-07-26 11:37:25.242784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.242819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-07-26 11:37:25.243034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.243062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-07-26 11:37:25.243284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.243348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-07-26 11:37:25.243659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.243688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-07-26 11:37:25.243953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.243981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-07-26 11:37:25.244206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.244269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-07-26 11:37:25.244594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.244623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 
00:29:29.764 [2024-07-26 11:37:25.244826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.244854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-07-26 11:37:25.245072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.245136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-07-26 11:37:25.245387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.245422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-07-26 11:37:25.245643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.245672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-07-26 11:37:25.245907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.245972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-07-26 11:37:25.246240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.246275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-07-26 11:37:25.246506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.246535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.764 [2024-07-26 11:37:25.246683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.764 [2024-07-26 11:37:25.246747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.764 qpair failed and we were unable to recover it. 00:29:29.765 [2024-07-26 11:37:25.247048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.765 [2024-07-26 11:37:25.247084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.765 qpair failed and we were unable to recover it. 00:29:29.765 [2024-07-26 11:37:25.247337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.765 [2024-07-26 11:37:25.247365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.765 qpair failed and we were unable to recover it. 
00:29:29.765 [2024-07-26 11:37:25.247589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.765 [2024-07-26 11:37:25.247618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.765 qpair failed and we were unable to recover it. 00:29:29.765 [2024-07-26 11:37:25.247843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.765 [2024-07-26 11:37:25.247878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.765 qpair failed and we were unable to recover it. 00:29:29.765 [2024-07-26 11:37:25.248105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.765 [2024-07-26 11:37:25.248133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.765 qpair failed and we were unable to recover it. 00:29:29.765 [2024-07-26 11:37:25.248317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.765 [2024-07-26 11:37:25.248381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.765 qpair failed and we were unable to recover it. 00:29:29.765 [2024-07-26 11:37:25.248647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.765 [2024-07-26 11:37:25.248676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.765 qpair failed and we were unable to recover it. 00:29:29.765 [2024-07-26 11:37:25.248904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.765 [2024-07-26 11:37:25.248932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.765 qpair failed and we were unable to recover it. 00:29:29.765 [2024-07-26 11:37:25.249139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.765 [2024-07-26 11:37:25.249203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.765 qpair failed and we were unable to recover it. 00:29:29.765 [2024-07-26 11:37:25.249523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.765 [2024-07-26 11:37:25.249558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.765 qpair failed and we were unable to recover it. 00:29:29.765 [2024-07-26 11:37:25.249755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.765 [2024-07-26 11:37:25.249783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.765 qpair failed and we were unable to recover it. 00:29:29.765 [2024-07-26 11:37:25.250021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.765 [2024-07-26 11:37:25.250085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.765 qpair failed and we were unable to recover it. 
00:29:29.765 [2024-07-26 11:37:25.250417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.765 [2024-07-26 11:37:25.250510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.765 qpair failed and we were unable to recover it. 00:29:29.765 [2024-07-26 11:37:25.250717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.765 [2024-07-26 11:37:25.250753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.765 qpair failed and we were unable to recover it. 00:29:29.765 [2024-07-26 11:37:25.251005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.765 [2024-07-26 11:37:25.251069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.765 qpair failed and we were unable to recover it. 00:29:29.765 [2024-07-26 11:37:25.251416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.765 [2024-07-26 11:37:25.251511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.765 qpair failed and we were unable to recover it. 00:29:29.765 [2024-07-26 11:37:25.251739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.765 [2024-07-26 11:37:25.251768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.765 qpair failed and we were unable to recover it. 00:29:29.765 [2024-07-26 11:37:25.252042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.765 [2024-07-26 11:37:25.252107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.765 qpair failed and we were unable to recover it. 00:29:29.765 [2024-07-26 11:37:25.252388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.765 [2024-07-26 11:37:25.252422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.765 qpair failed and we were unable to recover it. 00:29:29.765 [2024-07-26 11:37:25.252683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.765 [2024-07-26 11:37:25.252712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.765 qpair failed and we were unable to recover it. 00:29:29.765 [2024-07-26 11:37:25.253046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.765 [2024-07-26 11:37:25.253110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.765 qpair failed and we were unable to recover it. 00:29:29.765 [2024-07-26 11:37:25.253410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.765 [2024-07-26 11:37:25.253455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.765 qpair failed and we were unable to recover it. 
00:29:29.770 [2024-07-26 11:37:25.317365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.770 [2024-07-26 11:37:25.317442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.770 qpair failed and we were unable to recover it. 00:29:29.770 [2024-07-26 11:37:25.317713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.770 [2024-07-26 11:37:25.317741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.770 qpair failed and we were unable to recover it. 00:29:29.770 [2024-07-26 11:37:25.318031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.770 [2024-07-26 11:37:25.318094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.770 qpair failed and we were unable to recover it. 00:29:29.770 [2024-07-26 11:37:25.318414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.770 [2024-07-26 11:37:25.318498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.770 qpair failed and we were unable to recover it. 00:29:29.770 [2024-07-26 11:37:25.318749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.770 [2024-07-26 11:37:25.318777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.770 qpair failed and we were unable to recover it. 00:29:29.770 [2024-07-26 11:37:25.319125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.770 [2024-07-26 11:37:25.319189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.770 qpair failed and we were unable to recover it. 00:29:29.770 [2024-07-26 11:37:25.319512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.770 [2024-07-26 11:37:25.319541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.770 qpair failed and we were unable to recover it. 00:29:29.770 [2024-07-26 11:37:25.319745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.770 [2024-07-26 11:37:25.319773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.770 qpair failed and we were unable to recover it. 00:29:29.770 [2024-07-26 11:37:25.320022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.770 [2024-07-26 11:37:25.320087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.770 qpair failed and we were unable to recover it. 00:29:29.771 [2024-07-26 11:37:25.320401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.771 [2024-07-26 11:37:25.320445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.771 qpair failed and we were unable to recover it. 
00:29:29.771 [2024-07-26 11:37:25.320647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.771 [2024-07-26 11:37:25.320676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.771 qpair failed and we were unable to recover it. 00:29:29.771 [2024-07-26 11:37:25.320887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.771 [2024-07-26 11:37:25.320952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.771 qpair failed and we were unable to recover it. 00:29:29.771 [2024-07-26 11:37:25.321259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.771 [2024-07-26 11:37:25.321293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.771 qpair failed and we were unable to recover it. 00:29:29.771 [2024-07-26 11:37:25.321630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.771 [2024-07-26 11:37:25.321659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.771 qpair failed and we were unable to recover it. 00:29:29.771 [2024-07-26 11:37:25.321988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.771 [2024-07-26 11:37:25.322051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.771 qpair failed and we were unable to recover it. 00:29:29.771 [2024-07-26 11:37:25.322369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.771 [2024-07-26 11:37:25.322403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.771 qpair failed and we were unable to recover it. 00:29:29.771 [2024-07-26 11:37:25.322719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.771 [2024-07-26 11:37:25.322748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.771 qpair failed and we were unable to recover it. 00:29:29.771 [2024-07-26 11:37:25.323051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.771 [2024-07-26 11:37:25.323115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.771 qpair failed and we were unable to recover it. 00:29:29.771 [2024-07-26 11:37:25.323443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.771 [2024-07-26 11:37:25.323495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.771 qpair failed and we were unable to recover it. 00:29:29.771 [2024-07-26 11:37:25.323747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.771 [2024-07-26 11:37:25.323817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.771 qpair failed and we were unable to recover it. 
00:29:29.771 [2024-07-26 11:37:25.324129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.771 [2024-07-26 11:37:25.324193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.771 qpair failed and we were unable to recover it. 00:29:29.771 [2024-07-26 11:37:25.324497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.771 [2024-07-26 11:37:25.324533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.771 qpair failed and we were unable to recover it. 00:29:29.771 [2024-07-26 11:37:25.324750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.771 [2024-07-26 11:37:25.324778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.771 qpair failed and we were unable to recover it. 00:29:29.771 [2024-07-26 11:37:25.325051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.771 [2024-07-26 11:37:25.325114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.771 qpair failed and we were unable to recover it. 00:29:29.771 [2024-07-26 11:37:25.325438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.771 [2024-07-26 11:37:25.325488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.771 qpair failed and we were unable to recover it. 00:29:29.771 [2024-07-26 11:37:25.325670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.771 [2024-07-26 11:37:25.325698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.771 qpair failed and we were unable to recover it. 00:29:29.771 [2024-07-26 11:37:25.325931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.771 [2024-07-26 11:37:25.325994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.771 qpair failed and we were unable to recover it. 00:29:29.771 [2024-07-26 11:37:25.326306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.771 [2024-07-26 11:37:25.326340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.771 qpair failed and we were unable to recover it. 00:29:29.771 [2024-07-26 11:37:25.326663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.771 [2024-07-26 11:37:25.326692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.771 qpair failed and we were unable to recover it. 00:29:29.771 [2024-07-26 11:37:25.326999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.771 [2024-07-26 11:37:25.327063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.771 qpair failed and we were unable to recover it. 
00:29:29.771 [2024-07-26 11:37:25.327339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.771 [2024-07-26 11:37:25.327374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.771 qpair failed and we were unable to recover it. 00:29:29.771 [2024-07-26 11:37:25.327642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.771 [2024-07-26 11:37:25.327676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.771 qpair failed and we were unable to recover it. 00:29:29.771 [2024-07-26 11:37:25.327878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.771 [2024-07-26 11:37:25.327943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.771 qpair failed and we were unable to recover it. 00:29:29.771 [2024-07-26 11:37:25.328232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.771 [2024-07-26 11:37:25.328266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.771 qpair failed and we were unable to recover it. 00:29:29.771 [2024-07-26 11:37:25.328519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.771 [2024-07-26 11:37:25.328548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.328767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.328832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.329158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.329215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.329502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.329531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.329814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.329878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.330167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.330202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 
00:29:29.772 [2024-07-26 11:37:25.330419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.330455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.330644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.330703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.331004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.331038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.331324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.331352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.331597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.331626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.331842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.331878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.332071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.332099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.332283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.332347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.332643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.332671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.332878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.332906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 
00:29:29.772 [2024-07-26 11:37:25.333148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.333212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.333508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.333543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.333868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.333896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.334195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.334260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.334548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.334577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.334784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.334812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.335060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.335123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.335487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.335523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.335770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.335799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.336047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.336111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 
00:29:29.772 [2024-07-26 11:37:25.336385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.336472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.336719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.336748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.336988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.337059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.337363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.337445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.337690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.337718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.337938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.338001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.338318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.338371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.338675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.338704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.338939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.339003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.339320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.339355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 
00:29:29.772 [2024-07-26 11:37:25.339698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.339763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.340096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.340169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.340471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.340507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.340834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.340885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.341173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.341237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.341562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.341614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.341931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.341959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.342243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.342308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.342626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.342655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.342861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.342889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 
00:29:29.772 [2024-07-26 11:37:25.343119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.343183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.343479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.343507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.343716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.343744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.343981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.344045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.344366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.344416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.344704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.344733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.345068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.345132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.345441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.345487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.772 [2024-07-26 11:37:25.345712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.772 [2024-07-26 11:37:25.345740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.772 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.346045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.346109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 
00:29:29.773 [2024-07-26 11:37:25.346412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.346457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.346655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.346684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.346906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.346969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.347303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.347374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.347638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.347667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.347912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.347977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.348267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.348301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.348588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.348617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.348849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.348913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.349239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.349294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 
00:29:29.773 [2024-07-26 11:37:25.349612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.349641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.349951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.350014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.350327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.350361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.350672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.350702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.351019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.351082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.351401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.351465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.351745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.351797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.352099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.352162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.352489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.352547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.352875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.352934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 
00:29:29.773 [2024-07-26 11:37:25.353259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.353323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.353659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.353707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.354001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.354029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.354260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.354323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.354664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.354693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.355003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.355031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.355280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.355344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.355658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.355687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.355988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.356017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.356338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.356402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 
00:29:29.773 [2024-07-26 11:37:25.356737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.356772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.357112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.357171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.357500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.357565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.357828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.357863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.358065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.358093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.358278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.358342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.358672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.358701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.359027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.359083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.359371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.359448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.359704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.359733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 
00:29:29.773 [2024-07-26 11:37:25.359942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.359970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.360270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.360333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.360666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.360694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.360972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.361000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.361200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.361263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.361588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.361624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.361848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.361876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.773 [2024-07-26 11:37:25.362108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.773 [2024-07-26 11:37:25.362173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.773 qpair failed and we were unable to recover it. 00:29:29.774 [2024-07-26 11:37:25.362513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-07-26 11:37:25.362553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-07-26 11:37:25.362815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-07-26 11:37:25.362843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 
00:29:29.774 [2024-07-26 11:37:25.363108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-07-26 11:37:25.363172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-07-26 11:37:25.363507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-07-26 11:37:25.363542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-07-26 11:37:25.363740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-07-26 11:37:25.363768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-07-26 11:37:25.363987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-07-26 11:37:25.364050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-07-26 11:37:25.364339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-07-26 11:37:25.364375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-07-26 11:37:25.364616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-07-26 11:37:25.364645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-07-26 11:37:25.364831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-07-26 11:37:25.364895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-07-26 11:37:25.365179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-07-26 11:37:25.365214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-07-26 11:37:25.365450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-07-26 11:37:25.365479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-07-26 11:37:25.365747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-07-26 11:37:25.365811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 
00:29:29.774 [2024-07-26 11:37:25.366092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-07-26 11:37:25.366127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-07-26 11:37:25.366378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-07-26 11:37:25.366406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-07-26 11:37:25.366611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-07-26 11:37:25.366639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-07-26 11:37:25.366906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-07-26 11:37:25.366941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-07-26 11:37:25.367215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-07-26 11:37:25.367243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-07-26 11:37:25.367510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-07-26 11:37:25.367575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-07-26 11:37:25.367860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-07-26 11:37:25.367895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-07-26 11:37:25.368089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-07-26 11:37:25.368117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-07-26 11:37:25.368339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-07-26 11:37:25.368403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 00:29:29.774 [2024-07-26 11:37:25.368739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-07-26 11:37:25.368792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.774 qpair failed and we were unable to recover it. 
00:29:29.774 [2024-07-26 11:37:25.369107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.774 [2024-07-26 11:37:25.369136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.775 qpair failed and we were unable to recover it. 00:29:29.775 [2024-07-26 11:37:25.369457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.775 [2024-07-26 11:37:25.369532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.775 qpair failed and we were unable to recover it. 00:29:29.775 [2024-07-26 11:37:25.369795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.775 [2024-07-26 11:37:25.369830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.775 qpair failed and we were unable to recover it. 00:29:29.775 [2024-07-26 11:37:25.370142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.775 [2024-07-26 11:37:25.370170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.775 qpair failed and we were unable to recover it. 00:29:29.775 [2024-07-26 11:37:25.370467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.775 [2024-07-26 11:37:25.370533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.775 qpair failed and we were unable to recover it. 00:29:29.775 [2024-07-26 11:37:25.370855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.775 [2024-07-26 11:37:25.370890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.775 qpair failed and we were unable to recover it. 00:29:29.775 [2024-07-26 11:37:25.371287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.775 [2024-07-26 11:37:25.371351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.775 qpair failed and we were unable to recover it. 00:29:29.775 [2024-07-26 11:37:25.371666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.775 [2024-07-26 11:37:25.371695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.776 qpair failed and we were unable to recover it. 00:29:29.776 [2024-07-26 11:37:25.371982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.776 [2024-07-26 11:37:25.372017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.776 qpair failed and we were unable to recover it. 00:29:29.776 [2024-07-26 11:37:25.372340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.776 [2024-07-26 11:37:25.372404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.776 qpair failed and we were unable to recover it. 
00:29:29.776 [2024-07-26 11:37:25.372709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.776 [2024-07-26 11:37:25.372737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.776 qpair failed and we were unable to recover it. 00:29:29.776 [2024-07-26 11:37:25.373057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.776 [2024-07-26 11:37:25.373120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.776 qpair failed and we were unable to recover it. 00:29:29.776 [2024-07-26 11:37:25.373442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.776 [2024-07-26 11:37:25.373471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.776 qpair failed and we were unable to recover it. 00:29:29.776 [2024-07-26 11:37:25.373703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.777 [2024-07-26 11:37:25.373767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.777 qpair failed and we were unable to recover it. 00:29:29.777 [2024-07-26 11:37:25.374088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.777 [2024-07-26 11:37:25.374123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.777 qpair failed and we were unable to recover it. 00:29:29.777 [2024-07-26 11:37:25.374439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.777 [2024-07-26 11:37:25.374468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.777 qpair failed and we were unable to recover it. 00:29:29.777 [2024-07-26 11:37:25.374656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.777 [2024-07-26 11:37:25.374732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.777 qpair failed and we were unable to recover it. 00:29:29.777 [2024-07-26 11:37:25.375065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.777 [2024-07-26 11:37:25.375130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.777 qpair failed and we were unable to recover it. 00:29:29.777 [2024-07-26 11:37:25.375450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.777 [2024-07-26 11:37:25.375501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.777 qpair failed and we were unable to recover it. 00:29:29.777 [2024-07-26 11:37:25.375783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.777 [2024-07-26 11:37:25.375847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.777 qpair failed and we were unable to recover it. 
00:29:29.777 [2024-07-26 11:37:25.376181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.777 [2024-07-26 11:37:25.376250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.777 qpair failed and we were unable to recover it. 00:29:29.777 [2024-07-26 11:37:25.376563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.777 [2024-07-26 11:37:25.376592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.777 qpair failed and we were unable to recover it. 00:29:29.777 [2024-07-26 11:37:25.376867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.777 [2024-07-26 11:37:25.376930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.777 qpair failed and we were unable to recover it. 00:29:29.777 [2024-07-26 11:37:25.377259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.777 [2024-07-26 11:37:25.377314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.777 qpair failed and we were unable to recover it. 00:29:29.777 [2024-07-26 11:37:25.377645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.777 [2024-07-26 11:37:25.377691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.777 qpair failed and we were unable to recover it. 00:29:29.777 [2024-07-26 11:37:25.377894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.777 [2024-07-26 11:37:25.377928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.777 qpair failed and we were unable to recover it. 00:29:29.778 [2024-07-26 11:37:25.378131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.778 [2024-07-26 11:37:25.378166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.778 qpair failed and we were unable to recover it. 00:29:29.778 [2024-07-26 11:37:25.378395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.778 [2024-07-26 11:37:25.378424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.778 qpair failed and we were unable to recover it. 00:29:29.778 [2024-07-26 11:37:25.378639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.778 [2024-07-26 11:37:25.378668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.778 qpair failed and we were unable to recover it. 00:29:29.778 [2024-07-26 11:37:25.378903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.778 [2024-07-26 11:37:25.378937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.778 qpair failed and we were unable to recover it. 
00:29:29.778 [2024-07-26 11:37:25.379218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.778 [2024-07-26 11:37:25.379246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.778 qpair failed and we were unable to recover it. 00:29:29.778 [2024-07-26 11:37:25.379484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.778 [2024-07-26 11:37:25.379559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.778 qpair failed and we were unable to recover it. 00:29:29.778 [2024-07-26 11:37:25.379901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.778 [2024-07-26 11:37:25.379934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.778 qpair failed and we were unable to recover it. 00:29:29.778 [2024-07-26 11:37:25.380150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.778 [2024-07-26 11:37:25.380178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.778 qpair failed and we were unable to recover it. 00:29:29.778 [2024-07-26 11:37:25.380413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.778 [2024-07-26 11:37:25.380455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:29.778 qpair failed and we were unable to recover it. 00:29:30.053 [2024-07-26 11:37:25.380686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.053 [2024-07-26 11:37:25.380734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.053 qpair failed and we were unable to recover it. 00:29:30.053 [2024-07-26 11:37:25.380969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.053 [2024-07-26 11:37:25.380998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.053 qpair failed and we were unable to recover it. 00:29:30.053 [2024-07-26 11:37:25.381230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.053 [2024-07-26 11:37:25.381295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.053 qpair failed and we were unable to recover it. 00:29:30.053 [2024-07-26 11:37:25.381611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.053 [2024-07-26 11:37:25.381640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.053 qpair failed and we were unable to recover it. 00:29:30.053 [2024-07-26 11:37:25.381821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.053 [2024-07-26 11:37:25.381850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.053 qpair failed and we were unable to recover it. 
00:29:30.053 [2024-07-26 11:37:25.382096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.053 [2024-07-26 11:37:25.382129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.053 qpair failed and we were unable to recover it. 00:29:30.053 [2024-07-26 11:37:25.382389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.053 [2024-07-26 11:37:25.382422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.053 qpair failed and we were unable to recover it. 00:29:30.053 [2024-07-26 11:37:25.382742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.053 [2024-07-26 11:37:25.382777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.053 qpair failed and we were unable to recover it. 00:29:30.053 [2024-07-26 11:37:25.383003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.053 [2024-07-26 11:37:25.383036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.053 qpair failed and we were unable to recover it. 00:29:30.053 [2024-07-26 11:37:25.383228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.053 [2024-07-26 11:37:25.383261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.053 qpair failed and we were unable to recover it. 00:29:30.053 [2024-07-26 11:37:25.383505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.053 [2024-07-26 11:37:25.383535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.053 qpair failed and we were unable to recover it. 00:29:30.053 [2024-07-26 11:37:25.383787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.053 [2024-07-26 11:37:25.383850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.053 qpair failed and we were unable to recover it. 00:29:30.053 [2024-07-26 11:37:25.384108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.053 [2024-07-26 11:37:25.384141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.053 qpair failed and we were unable to recover it. 00:29:30.053 [2024-07-26 11:37:25.384405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.053 [2024-07-26 11:37:25.384440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.053 qpair failed and we were unable to recover it. 00:29:30.053 [2024-07-26 11:37:25.384618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.053 [2024-07-26 11:37:25.384646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.053 qpair failed and we were unable to recover it. 
00:29:30.053 [2024-07-26 11:37:25.384957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.053 [2024-07-26 11:37:25.384990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.053 qpair failed and we were unable to recover it. 00:29:30.053 [2024-07-26 11:37:25.385308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.054 [2024-07-26 11:37:25.385354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.054 qpair failed and we were unable to recover it. 00:29:30.054 [2024-07-26 11:37:25.385571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.054 [2024-07-26 11:37:25.385599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.054 qpair failed and we were unable to recover it. 00:29:30.054 [2024-07-26 11:37:25.385788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.054 [2024-07-26 11:37:25.385823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.054 qpair failed and we were unable to recover it. 00:29:30.054 [2024-07-26 11:37:25.386041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.054 [2024-07-26 11:37:25.386069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.054 qpair failed and we were unable to recover it. 00:29:30.054 [2024-07-26 11:37:25.386302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.054 [2024-07-26 11:37:25.386366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.054 qpair failed and we were unable to recover it. 00:29:30.054 [2024-07-26 11:37:25.386697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.054 [2024-07-26 11:37:25.386726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.054 qpair failed and we were unable to recover it. 00:29:30.054 [2024-07-26 11:37:25.387061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.054 [2024-07-26 11:37:25.387089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.054 qpair failed and we were unable to recover it. 00:29:30.054 [2024-07-26 11:37:25.387334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.054 [2024-07-26 11:37:25.387408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.054 qpair failed and we were unable to recover it. 00:29:30.054 [2024-07-26 11:37:25.387741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.054 [2024-07-26 11:37:25.387794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.054 qpair failed and we were unable to recover it. 
00:29:30.054 [2024-07-26 11:37:25.388125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.054 [2024-07-26 11:37:25.388154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.054 qpair failed and we were unable to recover it. 00:29:30.054 [2024-07-26 11:37:25.388565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.054 [2024-07-26 11:37:25.388630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.054 qpair failed and we were unable to recover it. 00:29:30.054 [2024-07-26 11:37:25.388976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.054 [2024-07-26 11:37:25.389051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.054 qpair failed and we were unable to recover it. 00:29:30.054 [2024-07-26 11:37:25.389364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.054 [2024-07-26 11:37:25.389392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.054 qpair failed and we were unable to recover it. 00:29:30.054 [2024-07-26 11:37:25.389682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.054 [2024-07-26 11:37:25.389711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.054 qpair failed and we were unable to recover it. 00:29:30.054 [2024-07-26 11:37:25.390018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.054 [2024-07-26 11:37:25.390053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.054 qpair failed and we were unable to recover it. 00:29:30.054 [2024-07-26 11:37:25.390311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.054 [2024-07-26 11:37:25.390339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.054 qpair failed and we were unable to recover it. 00:29:30.054 [2024-07-26 11:37:25.390526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.054 [2024-07-26 11:37:25.390591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.054 qpair failed and we were unable to recover it. 00:29:30.054 [2024-07-26 11:37:25.390884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.054 [2024-07-26 11:37:25.390919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.054 qpair failed and we were unable to recover it. 00:29:30.054 [2024-07-26 11:37:25.391160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.054 [2024-07-26 11:37:25.391188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.054 qpair failed and we were unable to recover it. 
00:29:30.054 [2024-07-26 11:37:25.391415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.054 [2024-07-26 11:37:25.391494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.054 qpair failed and we were unable to recover it. 00:29:30.054 [2024-07-26 11:37:25.391773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.054 [2024-07-26 11:37:25.391837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.054 qpair failed and we were unable to recover it. 00:29:30.054 [2024-07-26 11:37:25.392182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.054 [2024-07-26 11:37:25.392235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.054 qpair failed and we were unable to recover it. 00:29:30.054 [2024-07-26 11:37:25.392567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.054 [2024-07-26 11:37:25.392633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.054 qpair failed and we were unable to recover it. 00:29:30.054 [2024-07-26 11:37:25.392955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.054 [2024-07-26 11:37:25.392990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.054 qpair failed and we were unable to recover it. 00:29:30.054 [2024-07-26 11:37:25.393279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.054 [2024-07-26 11:37:25.393307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.054 qpair failed and we were unable to recover it. 00:29:30.054 [2024-07-26 11:37:25.393560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.054 [2024-07-26 11:37:25.393625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.054 qpair failed and we were unable to recover it. 00:29:30.054 [2024-07-26 11:37:25.393952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.054 [2024-07-26 11:37:25.394020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.054 qpair failed and we were unable to recover it. 00:29:30.054 [2024-07-26 11:37:25.394349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.054 [2024-07-26 11:37:25.394415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.054 qpair failed and we were unable to recover it. 00:29:30.054 [2024-07-26 11:37:25.394699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.054 [2024-07-26 11:37:25.394777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.054 qpair failed and we were unable to recover it. 
00:29:30.054 [2024-07-26 11:37:25.395093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.054 [2024-07-26 11:37:25.395128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.054 qpair failed and we were unable to recover it. 00:29:30.054 [2024-07-26 11:37:25.395466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.054 [2024-07-26 11:37:25.395519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.054 qpair failed and we were unable to recover it. 00:29:30.054 [2024-07-26 11:37:25.395846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.054 [2024-07-26 11:37:25.395909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.054 qpair failed and we were unable to recover it. 00:29:30.054 [2024-07-26 11:37:25.396249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.054 [2024-07-26 11:37:25.396315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.054 qpair failed and we were unable to recover it. 00:29:30.054 [2024-07-26 11:37:25.396644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.054 [2024-07-26 11:37:25.396673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.054 qpair failed and we were unable to recover it. 00:29:30.054 [2024-07-26 11:37:25.397014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.054 [2024-07-26 11:37:25.397078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.054 qpair failed and we were unable to recover it. 00:29:30.054 [2024-07-26 11:37:25.397392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.054 [2024-07-26 11:37:25.397457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.054 qpair failed and we were unable to recover it. 00:29:30.054 [2024-07-26 11:37:25.397720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.055 [2024-07-26 11:37:25.397748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.055 qpair failed and we were unable to recover it. 00:29:30.055 [2024-07-26 11:37:25.398084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.055 [2024-07-26 11:37:25.398148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.055 qpair failed and we were unable to recover it. 00:29:30.055 [2024-07-26 11:37:25.398463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.055 [2024-07-26 11:37:25.398499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.055 qpair failed and we were unable to recover it. 
00:29:30.055 [2024-07-26 11:37:25.398830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.055 [2024-07-26 11:37:25.398859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.055 qpair failed and we were unable to recover it. 00:29:30.055 [2024-07-26 11:37:25.399192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.055 [2024-07-26 11:37:25.399254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.055 qpair failed and we were unable to recover it. 00:29:30.055 [2024-07-26 11:37:25.399559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.055 [2024-07-26 11:37:25.399595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.055 qpair failed and we were unable to recover it. 00:29:30.055 [2024-07-26 11:37:25.399867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.055 [2024-07-26 11:37:25.399895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.055 qpair failed and we were unable to recover it. 00:29:30.055 [2024-07-26 11:37:25.400081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.055 [2024-07-26 11:37:25.400144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.055 qpair failed and we were unable to recover it. 00:29:30.055 [2024-07-26 11:37:25.400441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.055 [2024-07-26 11:37:25.400478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.055 qpair failed and we were unable to recover it. 00:29:30.055 [2024-07-26 11:37:25.400725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.055 [2024-07-26 11:37:25.400753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.055 qpair failed and we were unable to recover it. 00:29:30.055 [2024-07-26 11:37:25.401084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.055 [2024-07-26 11:37:25.401146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.055 qpair failed and we were unable to recover it. 00:29:30.055 [2024-07-26 11:37:25.401479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.055 [2024-07-26 11:37:25.401513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.055 qpair failed and we were unable to recover it. 00:29:30.055 [2024-07-26 11:37:25.401740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.055 [2024-07-26 11:37:25.401768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.055 qpair failed and we were unable to recover it. 
00:29:30.055 [2024-07-26 11:37:25.402084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.055 [2024-07-26 11:37:25.402148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.055 qpair failed and we were unable to recover it. 00:29:30.055 [2024-07-26 11:37:25.402403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.055 [2024-07-26 11:37:25.402447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.055 qpair failed and we were unable to recover it. 00:29:30.055 [2024-07-26 11:37:25.402658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.055 [2024-07-26 11:37:25.402686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.055 qpair failed and we were unable to recover it. 00:29:30.055 [2024-07-26 11:37:25.402907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.055 [2024-07-26 11:37:25.402972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.055 qpair failed and we were unable to recover it. 00:29:30.055 [2024-07-26 11:37:25.403288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.055 [2024-07-26 11:37:25.403321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.055 qpair failed and we were unable to recover it. 00:29:30.055 [2024-07-26 11:37:25.403673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.055 [2024-07-26 11:37:25.403702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.055 qpair failed and we were unable to recover it. 00:29:30.055 [2024-07-26 11:37:25.404049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.055 [2024-07-26 11:37:25.404113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.055 qpair failed and we were unable to recover it. 00:29:30.055 [2024-07-26 11:37:25.404382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.055 [2024-07-26 11:37:25.404417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.055 qpair failed and we were unable to recover it. 00:29:30.055 [2024-07-26 11:37:25.404666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.055 [2024-07-26 11:37:25.404694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.055 qpair failed and we were unable to recover it. 00:29:30.055 [2024-07-26 11:37:25.404941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.055 [2024-07-26 11:37:25.405005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.055 qpair failed and we were unable to recover it. 
00:29:30.055 [2024-07-26 11:37:25.405270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.055 [2024-07-26 11:37:25.405305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.055 qpair failed and we were unable to recover it. 00:29:30.055 [2024-07-26 11:37:25.405524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.055 [2024-07-26 11:37:25.405553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.055 qpair failed and we were unable to recover it. 00:29:30.055 [2024-07-26 11:37:25.405785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.055 [2024-07-26 11:37:25.405850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.055 qpair failed and we were unable to recover it. 00:29:30.055 [2024-07-26 11:37:25.406171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.055 [2024-07-26 11:37:25.406206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.055 qpair failed and we were unable to recover it. 00:29:30.055 [2024-07-26 11:37:25.406531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.055 [2024-07-26 11:37:25.406560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.055 qpair failed and we were unable to recover it. 00:29:30.055 [2024-07-26 11:37:25.406800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.056 [2024-07-26 11:37:25.406864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.056 qpair failed and we were unable to recover it. 00:29:30.056 [2024-07-26 11:37:25.407163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.056 [2024-07-26 11:37:25.407199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.056 qpair failed and we were unable to recover it. 00:29:30.056 [2024-07-26 11:37:25.407495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.056 [2024-07-26 11:37:25.407524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.056 qpair failed and we were unable to recover it. 00:29:30.056 [2024-07-26 11:37:25.407751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.056 [2024-07-26 11:37:25.407815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.056 qpair failed and we were unable to recover it. 00:29:30.056 [2024-07-26 11:37:25.408139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.056 [2024-07-26 11:37:25.408174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.056 qpair failed and we were unable to recover it. 
00:29:30.056 [2024-07-26 11:37:25.408453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.056 [2024-07-26 11:37:25.408482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.056 qpair failed and we were unable to recover it. 00:29:30.056 [2024-07-26 11:37:25.408706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.056 [2024-07-26 11:37:25.408770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.056 qpair failed and we were unable to recover it. 00:29:30.056 [2024-07-26 11:37:25.409082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.056 [2024-07-26 11:37:25.409117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.056 qpair failed and we were unable to recover it. 00:29:30.056 [2024-07-26 11:37:25.409453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.056 [2024-07-26 11:37:25.409482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.056 qpair failed and we were unable to recover it. 00:29:30.056 [2024-07-26 11:37:25.409685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.056 [2024-07-26 11:37:25.409750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.056 qpair failed and we were unable to recover it. 00:29:30.056 [2024-07-26 11:37:25.410083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.056 [2024-07-26 11:37:25.410118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.056 qpair failed and we were unable to recover it. 00:29:30.056 [2024-07-26 11:37:25.410424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.056 [2024-07-26 11:37:25.410460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.056 qpair failed and we were unable to recover it. 00:29:30.056 [2024-07-26 11:37:25.410624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.056 [2024-07-26 11:37:25.410653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.056 qpair failed and we were unable to recover it. 00:29:30.056 [2024-07-26 11:37:25.410947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.056 [2024-07-26 11:37:25.410982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.056 qpair failed and we were unable to recover it. 00:29:30.056 [2024-07-26 11:37:25.411240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.056 [2024-07-26 11:37:25.411269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.056 qpair failed and we were unable to recover it. 
00:29:30.056 [2024-07-26 11:37:25.411548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.056 [2024-07-26 11:37:25.411578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.056 qpair failed and we were unable to recover it. 00:29:30.056 [2024-07-26 11:37:25.411761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.056 [2024-07-26 11:37:25.411790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.056 qpair failed and we were unable to recover it. 00:29:30.056 [2024-07-26 11:37:25.412021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.056 [2024-07-26 11:37:25.412050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.056 qpair failed and we were unable to recover it. 00:29:30.056 [2024-07-26 11:37:25.412272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.056 [2024-07-26 11:37:25.412336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.056 qpair failed and we were unable to recover it. 00:29:30.056 [2024-07-26 11:37:25.412663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.056 [2024-07-26 11:37:25.412709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.056 qpair failed and we were unable to recover it. 00:29:30.056 [2024-07-26 11:37:25.412983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.056 [2024-07-26 11:37:25.413011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.056 qpair failed and we were unable to recover it. 00:29:30.056 [2024-07-26 11:37:25.413210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.056 [2024-07-26 11:37:25.413275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.056 qpair failed and we were unable to recover it. 00:29:30.056 [2024-07-26 11:37:25.413570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.056 [2024-07-26 11:37:25.413599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.056 qpair failed and we were unable to recover it. 00:29:30.056 [2024-07-26 11:37:25.413787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.056 [2024-07-26 11:37:25.413820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.056 qpair failed and we were unable to recover it. 00:29:30.056 [2024-07-26 11:37:25.414041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.056 [2024-07-26 11:37:25.414106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.056 qpair failed and we were unable to recover it. 
00:29:30.056 [2024-07-26 11:37:25.414423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.056 [2024-07-26 11:37:25.414467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.056 qpair failed and we were unable to recover it. 00:29:30.057 [2024-07-26 11:37:25.414734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.057 [2024-07-26 11:37:25.414796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.057 qpair failed and we were unable to recover it. 00:29:30.057 [2024-07-26 11:37:25.415075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.057 [2024-07-26 11:37:25.415139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.057 qpair failed and we were unable to recover it. 00:29:30.057 [2024-07-26 11:37:25.415454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.057 [2024-07-26 11:37:25.415504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.057 qpair failed and we were unable to recover it. 00:29:30.057 [2024-07-26 11:37:25.415740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.057 [2024-07-26 11:37:25.415800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.057 qpair failed and we were unable to recover it. 00:29:30.057 [2024-07-26 11:37:25.416125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.057 [2024-07-26 11:37:25.416190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.057 qpair failed and we were unable to recover it. 00:29:30.057 [2024-07-26 11:37:25.416514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.057 [2024-07-26 11:37:25.416543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.057 qpair failed and we were unable to recover it. 00:29:30.057 [2024-07-26 11:37:25.416753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.057 [2024-07-26 11:37:25.416781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.057 qpair failed and we were unable to recover it. 00:29:30.057 [2024-07-26 11:37:25.417100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.057 [2024-07-26 11:37:25.417165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.057 qpair failed and we were unable to recover it. 00:29:30.057 [2024-07-26 11:37:25.417505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.057 [2024-07-26 11:37:25.417535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.057 qpair failed and we were unable to recover it. 
00:29:30.064 [2024-07-26 11:37:25.476825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.064 [2024-07-26 11:37:25.476889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.064 qpair failed and we were unable to recover it. 00:29:30.064 [2024-07-26 11:37:25.477177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.064 [2024-07-26 11:37:25.477212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.064 qpair failed and we were unable to recover it. 00:29:30.064 [2024-07-26 11:37:25.477442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.064 [2024-07-26 11:37:25.477471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.064 qpair failed and we were unable to recover it. 00:29:30.065 [2024-07-26 11:37:25.477663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.065 [2024-07-26 11:37:25.477740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.065 qpair failed and we were unable to recover it. 00:29:30.065 [2024-07-26 11:37:25.477996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.065 [2024-07-26 11:37:25.478030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.065 qpair failed and we were unable to recover it. 00:29:30.065 [2024-07-26 11:37:25.478241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.065 [2024-07-26 11:37:25.478270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.065 qpair failed and we were unable to recover it. 00:29:30.065 [2024-07-26 11:37:25.478472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.065 [2024-07-26 11:37:25.478528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.065 qpair failed and we were unable to recover it. 00:29:30.065 [2024-07-26 11:37:25.478750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.065 [2024-07-26 11:37:25.478785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.065 qpair failed and we were unable to recover it. 00:29:30.065 [2024-07-26 11:37:25.479000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.065 [2024-07-26 11:37:25.479028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.065 qpair failed and we were unable to recover it. 00:29:30.065 [2024-07-26 11:37:25.479249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.065 [2024-07-26 11:37:25.479313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.065 qpair failed and we were unable to recover it. 
00:29:30.065 [2024-07-26 11:37:25.479581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.065 [2024-07-26 11:37:25.479611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.065 qpair failed and we were unable to recover it. 00:29:30.065 [2024-07-26 11:37:25.479796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.065 [2024-07-26 11:37:25.479824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.065 qpair failed and we were unable to recover it. 00:29:30.065 [2024-07-26 11:37:25.480053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.065 [2024-07-26 11:37:25.480116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.065 qpair failed and we were unable to recover it. 00:29:30.065 [2024-07-26 11:37:25.480404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.065 [2024-07-26 11:37:25.480450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.065 qpair failed and we were unable to recover it. 00:29:30.065 [2024-07-26 11:37:25.480696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.065 [2024-07-26 11:37:25.480724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.065 qpair failed and we were unable to recover it. 00:29:30.065 [2024-07-26 11:37:25.481015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.065 [2024-07-26 11:37:25.481079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.065 qpair failed and we were unable to recover it. 00:29:30.065 [2024-07-26 11:37:25.481380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.065 [2024-07-26 11:37:25.481415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.065 qpair failed and we were unable to recover it. 00:29:30.065 [2024-07-26 11:37:25.481638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.065 [2024-07-26 11:37:25.481666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.065 qpair failed and we were unable to recover it. 00:29:30.065 [2024-07-26 11:37:25.481868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.065 [2024-07-26 11:37:25.481933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.065 qpair failed and we were unable to recover it. 00:29:30.065 [2024-07-26 11:37:25.482222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.065 [2024-07-26 11:37:25.482256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.065 qpair failed and we were unable to recover it. 
00:29:30.065 [2024-07-26 11:37:25.482491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.065 [2024-07-26 11:37:25.482521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.065 qpair failed and we were unable to recover it. 00:29:30.065 [2024-07-26 11:37:25.482691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.065 [2024-07-26 11:37:25.482756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.065 qpair failed and we were unable to recover it. 00:29:30.065 [2024-07-26 11:37:25.483043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.065 [2024-07-26 11:37:25.483078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.065 qpair failed and we were unable to recover it. 00:29:30.065 [2024-07-26 11:37:25.483314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.065 [2024-07-26 11:37:25.483342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.065 qpair failed and we were unable to recover it. 00:29:30.065 [2024-07-26 11:37:25.483548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.065 [2024-07-26 11:37:25.483578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.065 qpair failed and we were unable to recover it. 00:29:30.065 [2024-07-26 11:37:25.483839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.065 [2024-07-26 11:37:25.483912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.065 qpair failed and we were unable to recover it. 00:29:30.065 [2024-07-26 11:37:25.484187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.065 [2024-07-26 11:37:25.484216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.065 qpair failed and we were unable to recover it. 00:29:30.065 [2024-07-26 11:37:25.484419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.065 [2024-07-26 11:37:25.484508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.065 qpair failed and we were unable to recover it. 00:29:30.065 [2024-07-26 11:37:25.484736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.065 [2024-07-26 11:37:25.484771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.065 qpair failed and we were unable to recover it. 00:29:30.066 [2024-07-26 11:37:25.485048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.066 [2024-07-26 11:37:25.485077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.066 qpair failed and we were unable to recover it. 
00:29:30.066 [2024-07-26 11:37:25.485361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.066 [2024-07-26 11:37:25.485424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.066 qpair failed and we were unable to recover it. 00:29:30.066 [2024-07-26 11:37:25.485748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.066 [2024-07-26 11:37:25.485807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.066 qpair failed and we were unable to recover it. 00:29:30.066 [2024-07-26 11:37:25.486128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.066 [2024-07-26 11:37:25.486190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.066 qpair failed and we were unable to recover it. 00:29:30.066 [2024-07-26 11:37:25.486533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.066 [2024-07-26 11:37:25.486611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.066 qpair failed and we were unable to recover it. 00:29:30.066 [2024-07-26 11:37:25.486873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.066 [2024-07-26 11:37:25.486908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.066 qpair failed and we were unable to recover it. 00:29:30.066 [2024-07-26 11:37:25.487237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.066 [2024-07-26 11:37:25.487265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.066 qpair failed and we were unable to recover it. 00:29:30.066 [2024-07-26 11:37:25.487542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.066 [2024-07-26 11:37:25.487606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.066 qpair failed and we were unable to recover it. 00:29:30.066 [2024-07-26 11:37:25.487906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.066 [2024-07-26 11:37:25.487941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.066 qpair failed and we were unable to recover it. 00:29:30.066 [2024-07-26 11:37:25.488272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.066 [2024-07-26 11:37:25.488349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.066 qpair failed and we were unable to recover it. 00:29:30.066 [2024-07-26 11:37:25.488669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.066 [2024-07-26 11:37:25.488698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.066 qpair failed and we were unable to recover it. 
00:29:30.066 [2024-07-26 11:37:25.488987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.066 [2024-07-26 11:37:25.489022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.066 qpair failed and we were unable to recover it. 00:29:30.066 [2024-07-26 11:37:25.489337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.066 [2024-07-26 11:37:25.489398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.066 qpair failed and we were unable to recover it. 00:29:30.066 [2024-07-26 11:37:25.489762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.066 [2024-07-26 11:37:25.489826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.066 qpair failed and we were unable to recover it. 00:29:30.066 [2024-07-26 11:37:25.490113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.066 [2024-07-26 11:37:25.490148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.066 qpair failed and we were unable to recover it. 00:29:30.066 [2024-07-26 11:37:25.490456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.066 [2024-07-26 11:37:25.490503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.066 qpair failed and we were unable to recover it. 00:29:30.066 [2024-07-26 11:37:25.490758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.066 [2024-07-26 11:37:25.490786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.066 qpair failed and we were unable to recover it. 00:29:30.066 [2024-07-26 11:37:25.491126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.066 [2024-07-26 11:37:25.491182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.066 qpair failed and we were unable to recover it. 00:29:30.066 [2024-07-26 11:37:25.491471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.066 [2024-07-26 11:37:25.491500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.066 qpair failed and we were unable to recover it. 00:29:30.066 [2024-07-26 11:37:25.491779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.067 [2024-07-26 11:37:25.491842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.067 qpair failed and we were unable to recover it. 00:29:30.067 [2024-07-26 11:37:25.492123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.067 [2024-07-26 11:37:25.492158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.067 qpair failed and we were unable to recover it. 
00:29:30.067 [2024-07-26 11:37:25.492419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.067 [2024-07-26 11:37:25.492456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.067 qpair failed and we were unable to recover it. 00:29:30.067 [2024-07-26 11:37:25.492655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.067 [2024-07-26 11:37:25.492726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.067 qpair failed and we were unable to recover it. 00:29:30.067 [2024-07-26 11:37:25.493025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.067 [2024-07-26 11:37:25.493060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.067 qpair failed and we were unable to recover it. 00:29:30.067 [2024-07-26 11:37:25.493352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.067 [2024-07-26 11:37:25.493380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.067 qpair failed and we were unable to recover it. 00:29:30.067 [2024-07-26 11:37:25.493609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.067 [2024-07-26 11:37:25.493638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.067 qpair failed and we were unable to recover it. 00:29:30.067 [2024-07-26 11:37:25.493831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.067 [2024-07-26 11:37:25.493866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.067 qpair failed and we were unable to recover it. 00:29:30.067 [2024-07-26 11:37:25.494071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.067 [2024-07-26 11:37:25.494099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.067 qpair failed and we were unable to recover it. 00:29:30.067 [2024-07-26 11:37:25.494325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.067 [2024-07-26 11:37:25.494389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.067 qpair failed and we were unable to recover it. 00:29:30.067 [2024-07-26 11:37:25.494731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.067 [2024-07-26 11:37:25.494766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.067 qpair failed and we were unable to recover it. 00:29:30.067 [2024-07-26 11:37:25.495096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.067 [2024-07-26 11:37:25.495160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.067 qpair failed and we were unable to recover it. 
00:29:30.067 [2024-07-26 11:37:25.495453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.067 [2024-07-26 11:37:25.495518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.067 qpair failed and we were unable to recover it. 00:29:30.067 [2024-07-26 11:37:25.495794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.067 [2024-07-26 11:37:25.495860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.067 qpair failed and we were unable to recover it. 00:29:30.067 [2024-07-26 11:37:25.496181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.067 [2024-07-26 11:37:25.496225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.067 qpair failed and we were unable to recover it. 00:29:30.067 [2024-07-26 11:37:25.496536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.067 [2024-07-26 11:37:25.496602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.067 qpair failed and we were unable to recover it. 00:29:30.067 [2024-07-26 11:37:25.496919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.067 [2024-07-26 11:37:25.496954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.067 qpair failed and we were unable to recover it. 00:29:30.067 [2024-07-26 11:37:25.497348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.067 [2024-07-26 11:37:25.497424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.067 qpair failed and we were unable to recover it. 00:29:30.067 [2024-07-26 11:37:25.497747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.067 [2024-07-26 11:37:25.497810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.067 qpair failed and we were unable to recover it. 00:29:30.067 [2024-07-26 11:37:25.498131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.067 [2024-07-26 11:37:25.498165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.067 qpair failed and we were unable to recover it. 00:29:30.067 [2024-07-26 11:37:25.498488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.067 [2024-07-26 11:37:25.498516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.067 qpair failed and we were unable to recover it. 00:29:30.067 [2024-07-26 11:37:25.498816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.067 [2024-07-26 11:37:25.498881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.067 qpair failed and we were unable to recover it. 
00:29:30.067 [2024-07-26 11:37:25.499175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.067 [2024-07-26 11:37:25.499210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.067 qpair failed and we were unable to recover it. 00:29:30.067 [2024-07-26 11:37:25.499483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.067 [2024-07-26 11:37:25.499511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.067 qpair failed and we were unable to recover it. 00:29:30.067 [2024-07-26 11:37:25.499770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.067 [2024-07-26 11:37:25.499833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.067 qpair failed and we were unable to recover it. 00:29:30.067 [2024-07-26 11:37:25.500166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.067 [2024-07-26 11:37:25.500225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.067 qpair failed and we were unable to recover it. 00:29:30.067 [2024-07-26 11:37:25.500549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.067 [2024-07-26 11:37:25.500578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.068 qpair failed and we were unable to recover it. 00:29:30.068 [2024-07-26 11:37:25.500913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.068 [2024-07-26 11:37:25.500978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.068 qpair failed and we were unable to recover it. 00:29:30.068 [2024-07-26 11:37:25.501228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.068 [2024-07-26 11:37:25.501263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.068 qpair failed and we were unable to recover it. 00:29:30.068 [2024-07-26 11:37:25.501484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.068 [2024-07-26 11:37:25.501513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.068 qpair failed and we were unable to recover it. 00:29:30.068 [2024-07-26 11:37:25.501667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.068 [2024-07-26 11:37:25.501752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.068 qpair failed and we were unable to recover it. 00:29:30.068 [2024-07-26 11:37:25.502081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.068 [2024-07-26 11:37:25.502152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.068 qpair failed and we were unable to recover it. 
00:29:30.068 [2024-07-26 11:37:25.502477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.068 [2024-07-26 11:37:25.502534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.068 qpair failed and we were unable to recover it. 00:29:30.068 [2024-07-26 11:37:25.502860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.068 [2024-07-26 11:37:25.502924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.068 qpair failed and we were unable to recover it. 00:29:30.068 [2024-07-26 11:37:25.503237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.068 [2024-07-26 11:37:25.503271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.068 qpair failed and we were unable to recover it. 00:29:30.068 [2024-07-26 11:37:25.503609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.068 [2024-07-26 11:37:25.503666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.068 qpair failed and we were unable to recover it. 00:29:30.068 [2024-07-26 11:37:25.503990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.068 [2024-07-26 11:37:25.504054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.068 qpair failed and we were unable to recover it. 00:29:30.068 [2024-07-26 11:37:25.504367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.068 [2024-07-26 11:37:25.504403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.068 qpair failed and we were unable to recover it. 00:29:30.068 [2024-07-26 11:37:25.504684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.068 [2024-07-26 11:37:25.504712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.068 qpair failed and we were unable to recover it. 00:29:30.068 [2024-07-26 11:37:25.504945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.068 [2024-07-26 11:37:25.505009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.068 qpair failed and we were unable to recover it. 00:29:30.068 [2024-07-26 11:37:25.505326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.068 [2024-07-26 11:37:25.505361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.068 qpair failed and we were unable to recover it. 00:29:30.068 [2024-07-26 11:37:25.505701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.068 [2024-07-26 11:37:25.505730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.068 qpair failed and we were unable to recover it. 
00:29:30.068 [2024-07-26 11:37:25.506031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.068 [2024-07-26 11:37:25.506095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.068 qpair failed and we were unable to recover it. 00:29:30.068 [2024-07-26 11:37:25.506439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.068 [2024-07-26 11:37:25.506509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.068 qpair failed and we were unable to recover it. 00:29:30.068 [2024-07-26 11:37:25.506773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.068 [2024-07-26 11:37:25.506821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.068 qpair failed and we were unable to recover it. 00:29:30.068 [2024-07-26 11:37:25.507086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.068 [2024-07-26 11:37:25.507150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.068 qpair failed and we were unable to recover it. 00:29:30.068 [2024-07-26 11:37:25.507495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.068 [2024-07-26 11:37:25.507524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.068 qpair failed and we were unable to recover it. 00:29:30.068 [2024-07-26 11:37:25.507766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.068 [2024-07-26 11:37:25.507820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.068 qpair failed and we were unable to recover it. 00:29:30.068 [2024-07-26 11:37:25.508097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.068 [2024-07-26 11:37:25.508160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.068 qpair failed and we were unable to recover it. 00:29:30.068 [2024-07-26 11:37:25.508489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.068 [2024-07-26 11:37:25.508554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.068 qpair failed and we were unable to recover it. 00:29:30.068 [2024-07-26 11:37:25.508836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.068 [2024-07-26 11:37:25.508864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.068 qpair failed and we were unable to recover it. 00:29:30.068 [2024-07-26 11:37:25.509123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.068 [2024-07-26 11:37:25.509186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.068 qpair failed and we were unable to recover it. 
00:29:30.068 [2024-07-26 11:37:25.509446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.068 [2024-07-26 11:37:25.509483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.068 qpair failed and we were unable to recover it. 00:29:30.069 [2024-07-26 11:37:25.509694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.069 [2024-07-26 11:37:25.509723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.069 qpair failed and we were unable to recover it. 00:29:30.069 [2024-07-26 11:37:25.509888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.069 [2024-07-26 11:37:25.509952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.069 qpair failed and we were unable to recover it. 00:29:30.069 [2024-07-26 11:37:25.510238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.069 [2024-07-26 11:37:25.510272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.069 qpair failed and we were unable to recover it. 00:29:30.069 [2024-07-26 11:37:25.510567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.069 [2024-07-26 11:37:25.510596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.069 qpair failed and we were unable to recover it. 00:29:30.069 [2024-07-26 11:37:25.510907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.069 [2024-07-26 11:37:25.510972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.069 qpair failed and we were unable to recover it. 00:29:30.069 [2024-07-26 11:37:25.511304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.069 [2024-07-26 11:37:25.511364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.069 qpair failed and we were unable to recover it. 00:29:30.069 [2024-07-26 11:37:25.511671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.069 [2024-07-26 11:37:25.511700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.069 qpair failed and we were unable to recover it. 00:29:30.069 [2024-07-26 11:37:25.511908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.069 [2024-07-26 11:37:25.511972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.069 qpair failed and we were unable to recover it. 00:29:30.069 [2024-07-26 11:37:25.512267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.069 [2024-07-26 11:37:25.512302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.069 qpair failed and we were unable to recover it. 
00:29:30.069 [2024-07-26 11:37:25.512534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.069 [2024-07-26 11:37:25.512563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.069 qpair failed and we were unable to recover it. 00:29:30.069 [2024-07-26 11:37:25.512759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.069 [2024-07-26 11:37:25.512823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.069 qpair failed and we were unable to recover it. 00:29:30.069 [2024-07-26 11:37:25.513141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.069 [2024-07-26 11:37:25.513176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.069 qpair failed and we were unable to recover it. 00:29:30.069 [2024-07-26 11:37:25.513467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.069 [2024-07-26 11:37:25.513496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.069 qpair failed and we were unable to recover it. 00:29:30.069 [2024-07-26 11:37:25.513676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.069 [2024-07-26 11:37:25.513740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.069 qpair failed and we were unable to recover it. 00:29:30.069 [2024-07-26 11:37:25.514077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.069 [2024-07-26 11:37:25.514140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.069 qpair failed and we were unable to recover it. 00:29:30.069 [2024-07-26 11:37:25.514401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.069 [2024-07-26 11:37:25.514435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.069 qpair failed and we were unable to recover it. 00:29:30.069 [2024-07-26 11:37:25.514652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.069 [2024-07-26 11:37:25.514729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.069 qpair failed and we were unable to recover it. 00:29:30.069 [2024-07-26 11:37:25.515046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.069 [2024-07-26 11:37:25.515102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.069 qpair failed and we were unable to recover it. 00:29:30.069 [2024-07-26 11:37:25.515386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.069 [2024-07-26 11:37:25.515414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.069 qpair failed and we were unable to recover it. 
00:29:30.069 [2024-07-26 11:37:25.515659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.069 [2024-07-26 11:37:25.515724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.069 qpair failed and we were unable to recover it. 00:29:30.069 [2024-07-26 11:37:25.516041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.069 [2024-07-26 11:37:25.516076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.069 qpair failed and we were unable to recover it. 00:29:30.069 [2024-07-26 11:37:25.516400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.069 [2024-07-26 11:37:25.516437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.069 qpair failed and we were unable to recover it. 00:29:30.069 [2024-07-26 11:37:25.516745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.069 [2024-07-26 11:37:25.516809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.069 qpair failed and we were unable to recover it. 00:29:30.069 [2024-07-26 11:37:25.517128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.069 [2024-07-26 11:37:25.517163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.069 qpair failed and we were unable to recover it. 00:29:30.069 [2024-07-26 11:37:25.517513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.069 [2024-07-26 11:37:25.517570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.069 qpair failed and we were unable to recover it. 00:29:30.069 [2024-07-26 11:37:25.517860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.069 [2024-07-26 11:37:25.517924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.069 qpair failed and we were unable to recover it. 00:29:30.070 [2024-07-26 11:37:25.518254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.070 [2024-07-26 11:37:25.518308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.070 qpair failed and we were unable to recover it. 00:29:30.070 [2024-07-26 11:37:25.518660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.070 [2024-07-26 11:37:25.518689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.070 qpair failed and we were unable to recover it. 00:29:30.070 [2024-07-26 11:37:25.518965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.070 [2024-07-26 11:37:25.519030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.070 qpair failed and we were unable to recover it. 
00:29:30.070 [2024-07-26 11:37:25.519351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.070 [2024-07-26 11:37:25.519386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.070 qpair failed and we were unable to recover it. 00:29:30.070 [2024-07-26 11:37:25.519711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.070 [2024-07-26 11:37:25.519740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.070 qpair failed and we were unable to recover it. 00:29:30.070 [2024-07-26 11:37:25.519996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.070 [2024-07-26 11:37:25.520060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.070 qpair failed and we were unable to recover it. 00:29:30.070 [2024-07-26 11:37:25.520353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.070 [2024-07-26 11:37:25.520388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.070 qpair failed and we were unable to recover it. 00:29:30.070 [2024-07-26 11:37:25.520685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.070 [2024-07-26 11:37:25.520714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.070 qpair failed and we were unable to recover it. 00:29:30.070 [2024-07-26 11:37:25.520962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.070 [2024-07-26 11:37:25.521026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.070 qpair failed and we were unable to recover it. 00:29:30.070 [2024-07-26 11:37:25.521311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.070 [2024-07-26 11:37:25.521346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.070 qpair failed and we were unable to recover it. 00:29:30.070 [2024-07-26 11:37:25.521546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.070 [2024-07-26 11:37:25.521575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.070 qpair failed and we were unable to recover it. 00:29:30.070 [2024-07-26 11:37:25.521737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.070 [2024-07-26 11:37:25.521800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.070 qpair failed and we were unable to recover it. 00:29:30.070 [2024-07-26 11:37:25.522090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.070 [2024-07-26 11:37:25.522125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.070 qpair failed and we were unable to recover it. 
00:29:30.070 [2024-07-26 11:37:25.522383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.070 [2024-07-26 11:37:25.522412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:30.070 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats roughly 200 more times for the same tqpair (0x7f0cb4000b90), with only the bracketed timestamps advancing from 11:37:25.522656 through 11:37:25.589470 ...]
00:29:30.077 [2024-07-26 11:37:25.589470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.077 [2024-07-26 11:37:25.589506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:30.077 qpair failed and we were unable to recover it.
00:29:30.077 [2024-07-26 11:37:25.589815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.077 [2024-07-26 11:37:25.589843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.077 qpair failed and we were unable to recover it. 00:29:30.077 [2024-07-26 11:37:25.590092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.077 [2024-07-26 11:37:25.590156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.077 qpair failed and we were unable to recover it. 00:29:30.077 [2024-07-26 11:37:25.590442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.077 [2024-07-26 11:37:25.590478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.077 qpair failed and we were unable to recover it. 00:29:30.077 [2024-07-26 11:37:25.590752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.077 [2024-07-26 11:37:25.590823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.077 qpair failed and we were unable to recover it. 00:29:30.077 [2024-07-26 11:37:25.591141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.077 [2024-07-26 11:37:25.591205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.077 qpair failed and we were unable to recover it. 00:29:30.077 [2024-07-26 11:37:25.591496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.077 [2024-07-26 11:37:25.591532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.077 qpair failed and we were unable to recover it. 00:29:30.077 [2024-07-26 11:37:25.591723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.077 [2024-07-26 11:37:25.591752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.077 qpair failed and we were unable to recover it. 00:29:30.077 [2024-07-26 11:37:25.591950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.077 [2024-07-26 11:37:25.592013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.077 qpair failed and we were unable to recover it. 00:29:30.077 [2024-07-26 11:37:25.592307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.077 [2024-07-26 11:37:25.592342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.077 qpair failed and we were unable to recover it. 00:29:30.077 [2024-07-26 11:37:25.592630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.077 [2024-07-26 11:37:25.592659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.077 qpair failed and we were unable to recover it. 
00:29:30.077 [2024-07-26 11:37:25.592840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.077 [2024-07-26 11:37:25.592904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.077 qpair failed and we were unable to recover it. 00:29:30.077 [2024-07-26 11:37:25.593218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.077 [2024-07-26 11:37:25.593253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.077 qpair failed and we were unable to recover it. 00:29:30.077 [2024-07-26 11:37:25.593574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.077 [2024-07-26 11:37:25.593603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.077 qpair failed and we were unable to recover it. 00:29:30.077 [2024-07-26 11:37:25.593936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.077 [2024-07-26 11:37:25.593999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.077 qpair failed and we were unable to recover it. 00:29:30.077 [2024-07-26 11:37:25.594328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.077 [2024-07-26 11:37:25.594386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.077 qpair failed and we were unable to recover it. 00:29:30.077 [2024-07-26 11:37:25.594713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.077 [2024-07-26 11:37:25.594742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.077 qpair failed and we were unable to recover it. 00:29:30.077 [2024-07-26 11:37:25.595047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.077 [2024-07-26 11:37:25.595110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.077 qpair failed and we were unable to recover it. 00:29:30.077 [2024-07-26 11:37:25.595391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.077 [2024-07-26 11:37:25.595426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.077 qpair failed and we were unable to recover it. 00:29:30.077 [2024-07-26 11:37:25.595672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.077 [2024-07-26 11:37:25.595700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.077 qpair failed and we were unable to recover it. 00:29:30.077 [2024-07-26 11:37:25.596014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.077 [2024-07-26 11:37:25.596079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.077 qpair failed and we were unable to recover it. 
00:29:30.077 [2024-07-26 11:37:25.596369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.077 [2024-07-26 11:37:25.596404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.077 qpair failed and we were unable to recover it. 00:29:30.077 [2024-07-26 11:37:25.596687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.077 [2024-07-26 11:37:25.596715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.077 qpair failed and we were unable to recover it. 00:29:30.077 [2024-07-26 11:37:25.596998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.078 [2024-07-26 11:37:25.597061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.078 qpair failed and we were unable to recover it. 00:29:30.078 [2024-07-26 11:37:25.597385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.078 [2024-07-26 11:37:25.597420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.078 qpair failed and we were unable to recover it. 00:29:30.078 [2024-07-26 11:37:25.597713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.078 [2024-07-26 11:37:25.597741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.078 qpair failed and we were unable to recover it. 00:29:30.078 [2024-07-26 11:37:25.598054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.078 [2024-07-26 11:37:25.598118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.078 qpair failed and we were unable to recover it. 00:29:30.078 [2024-07-26 11:37:25.598404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.078 [2024-07-26 11:37:25.598449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.078 qpair failed and we were unable to recover it. 00:29:30.078 [2024-07-26 11:37:25.598684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.078 [2024-07-26 11:37:25.598712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.078 qpair failed and we were unable to recover it. 00:29:30.078 [2024-07-26 11:37:25.599009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.078 [2024-07-26 11:37:25.599073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.078 qpair failed and we were unable to recover it. 00:29:30.078 [2024-07-26 11:37:25.599358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.078 [2024-07-26 11:37:25.599392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.078 qpair failed and we were unable to recover it. 
00:29:30.078 [2024-07-26 11:37:25.599626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.078 [2024-07-26 11:37:25.599655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.078 qpair failed and we were unable to recover it. 00:29:30.078 [2024-07-26 11:37:25.599856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.078 [2024-07-26 11:37:25.599921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.078 qpair failed and we were unable to recover it. 00:29:30.078 [2024-07-26 11:37:25.600249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.078 [2024-07-26 11:37:25.600301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.078 qpair failed and we were unable to recover it. 00:29:30.078 [2024-07-26 11:37:25.600623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.078 [2024-07-26 11:37:25.600652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.078 qpair failed and we were unable to recover it. 00:29:30.078 [2024-07-26 11:37:25.600878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.078 [2024-07-26 11:37:25.600943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.078 qpair failed and we were unable to recover it. 00:29:30.078 [2024-07-26 11:37:25.601239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.078 [2024-07-26 11:37:25.601279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.078 qpair failed and we were unable to recover it. 00:29:30.078 [2024-07-26 11:37:25.601602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.078 [2024-07-26 11:37:25.601631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.078 qpair failed and we were unable to recover it. 00:29:30.078 [2024-07-26 11:37:25.601968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.078 [2024-07-26 11:37:25.602032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.078 qpair failed and we were unable to recover it. 00:29:30.078 [2024-07-26 11:37:25.602346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.078 [2024-07-26 11:37:25.602381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.078 qpair failed and we were unable to recover it. 00:29:30.078 [2024-07-26 11:37:25.602676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.078 [2024-07-26 11:37:25.602704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.078 qpair failed and we were unable to recover it. 
00:29:30.078 [2024-07-26 11:37:25.602929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.078 [2024-07-26 11:37:25.602993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.078 qpair failed and we were unable to recover it. 00:29:30.078 [2024-07-26 11:37:25.603307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.078 [2024-07-26 11:37:25.603342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.078 qpair failed and we were unable to recover it. 00:29:30.078 [2024-07-26 11:37:25.603687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.078 [2024-07-26 11:37:25.603715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.078 qpair failed and we were unable to recover it. 00:29:30.078 [2024-07-26 11:37:25.604003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.078 [2024-07-26 11:37:25.604066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.078 qpair failed and we were unable to recover it. 00:29:30.078 [2024-07-26 11:37:25.604382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.078 [2024-07-26 11:37:25.604417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.078 qpair failed and we were unable to recover it. 00:29:30.078 [2024-07-26 11:37:25.604780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.078 [2024-07-26 11:37:25.604843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.078 qpair failed and we were unable to recover it. 00:29:30.078 [2024-07-26 11:37:25.605125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.078 [2024-07-26 11:37:25.605190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.078 qpair failed and we were unable to recover it. 00:29:30.078 [2024-07-26 11:37:25.605512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.078 [2024-07-26 11:37:25.605548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.078 qpair failed and we were unable to recover it. 00:29:30.078 [2024-07-26 11:37:25.605836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.078 [2024-07-26 11:37:25.605864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.078 qpair failed and we were unable to recover it. 00:29:30.078 [2024-07-26 11:37:25.606110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.078 [2024-07-26 11:37:25.606174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.078 qpair failed and we were unable to recover it. 
00:29:30.078 [2024-07-26 11:37:25.606426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.078 [2024-07-26 11:37:25.606469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.078 qpair failed and we were unable to recover it. 00:29:30.078 [2024-07-26 11:37:25.606677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.078 [2024-07-26 11:37:25.606706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.078 qpair failed and we were unable to recover it. 00:29:30.078 [2024-07-26 11:37:25.606948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.078 [2024-07-26 11:37:25.607012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.078 qpair failed and we were unable to recover it. 00:29:30.078 [2024-07-26 11:37:25.607325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.078 [2024-07-26 11:37:25.607359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.078 qpair failed and we were unable to recover it. 00:29:30.078 [2024-07-26 11:37:25.607687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.078 [2024-07-26 11:37:25.607716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.078 qpair failed and we were unable to recover it. 00:29:30.078 [2024-07-26 11:37:25.608049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.078 [2024-07-26 11:37:25.608114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.078 qpair failed and we were unable to recover it. 00:29:30.078 [2024-07-26 11:37:25.608437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.078 [2024-07-26 11:37:25.608472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.078 qpair failed and we were unable to recover it. 00:29:30.078 [2024-07-26 11:37:25.608720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.078 [2024-07-26 11:37:25.608748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.078 qpair failed and we were unable to recover it. 00:29:30.078 [2024-07-26 11:37:25.609080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.078 [2024-07-26 11:37:25.609143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.078 qpair failed and we were unable to recover it. 00:29:30.078 [2024-07-26 11:37:25.609460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.078 [2024-07-26 11:37:25.609508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.078 qpair failed and we were unable to recover it. 
00:29:30.079 [2024-07-26 11:37:25.609700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.079 [2024-07-26 11:37:25.609729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.079 qpair failed and we were unable to recover it. 00:29:30.079 [2024-07-26 11:37:25.609963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.079 [2024-07-26 11:37:25.610027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.079 qpair failed and we were unable to recover it. 00:29:30.079 [2024-07-26 11:37:25.610368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.079 [2024-07-26 11:37:25.610443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.079 qpair failed and we were unable to recover it. 00:29:30.079 [2024-07-26 11:37:25.610718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.079 [2024-07-26 11:37:25.610763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.079 qpair failed and we were unable to recover it. 00:29:30.079 [2024-07-26 11:37:25.611098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.079 [2024-07-26 11:37:25.611163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.079 qpair failed and we were unable to recover it. 00:29:30.079 [2024-07-26 11:37:25.611490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.079 [2024-07-26 11:37:25.611547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.079 qpair failed and we were unable to recover it. 00:29:30.079 [2024-07-26 11:37:25.611872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.079 [2024-07-26 11:37:25.611936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.079 qpair failed and we were unable to recover it. 00:29:30.079 [2024-07-26 11:37:25.612235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.079 [2024-07-26 11:37:25.612298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.079 qpair failed and we were unable to recover it. 00:29:30.079 [2024-07-26 11:37:25.612611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.079 [2024-07-26 11:37:25.612640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.079 qpair failed and we were unable to recover it. 00:29:30.079 [2024-07-26 11:37:25.612797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.079 [2024-07-26 11:37:25.612825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.079 qpair failed and we were unable to recover it. 
00:29:30.079 [2024-07-26 11:37:25.613047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.079 [2024-07-26 11:37:25.613111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.079 qpair failed and we were unable to recover it. 00:29:30.079 [2024-07-26 11:37:25.613398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.079 [2024-07-26 11:37:25.613477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.079 qpair failed and we were unable to recover it. 00:29:30.079 [2024-07-26 11:37:25.613709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.079 [2024-07-26 11:37:25.613737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.079 qpair failed and we were unable to recover it. 00:29:30.079 [2024-07-26 11:37:25.614003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.079 [2024-07-26 11:37:25.614067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.079 qpair failed and we were unable to recover it. 00:29:30.079 [2024-07-26 11:37:25.614339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.079 [2024-07-26 11:37:25.614374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.079 qpair failed and we were unable to recover it. 00:29:30.079 [2024-07-26 11:37:25.614654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.079 [2024-07-26 11:37:25.614687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.079 qpair failed and we were unable to recover it. 00:29:30.079 [2024-07-26 11:37:25.615022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.079 [2024-07-26 11:37:25.615086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.079 qpair failed and we were unable to recover it. 00:29:30.079 [2024-07-26 11:37:25.615410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.079 [2024-07-26 11:37:25.615454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.079 qpair failed and we were unable to recover it. 00:29:30.079 [2024-07-26 11:37:25.615771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.079 [2024-07-26 11:37:25.615843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.079 qpair failed and we were unable to recover it. 00:29:30.079 [2024-07-26 11:37:25.616164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.079 [2024-07-26 11:37:25.616229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.079 qpair failed and we were unable to recover it. 
00:29:30.079 [2024-07-26 11:37:25.616546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.079 [2024-07-26 11:37:25.616582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.079 qpair failed and we were unable to recover it. 00:29:30.079 [2024-07-26 11:37:25.616908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.079 [2024-07-26 11:37:25.616937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.079 qpair failed and we were unable to recover it. 00:29:30.079 [2024-07-26 11:37:25.617246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.079 [2024-07-26 11:37:25.617310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.079 qpair failed and we were unable to recover it. 00:29:30.079 [2024-07-26 11:37:25.617606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.079 [2024-07-26 11:37:25.617634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.079 qpair failed and we were unable to recover it. 00:29:30.079 [2024-07-26 11:37:25.617838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.079 [2024-07-26 11:37:25.617867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.079 qpair failed and we were unable to recover it. 00:29:30.079 [2024-07-26 11:37:25.618063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.079 [2024-07-26 11:37:25.618127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.079 qpair failed and we were unable to recover it. 00:29:30.079 [2024-07-26 11:37:25.618409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.079 [2024-07-26 11:37:25.618452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.079 qpair failed and we were unable to recover it. 00:29:30.079 [2024-07-26 11:37:25.618682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.079 [2024-07-26 11:37:25.618710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.079 qpair failed and we were unable to recover it. 00:29:30.079 [2024-07-26 11:37:25.618969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.079 [2024-07-26 11:37:25.619033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.079 qpair failed and we were unable to recover it. 00:29:30.079 [2024-07-26 11:37:25.619336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.079 [2024-07-26 11:37:25.619371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.079 qpair failed and we were unable to recover it. 
00:29:30.079 [2024-07-26 11:37:25.619571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.079 [2024-07-26 11:37:25.619600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.079 qpair failed and we were unable to recover it. 00:29:30.079 [2024-07-26 11:37:25.619794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.079 [2024-07-26 11:37:25.619857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.079 qpair failed and we were unable to recover it. 00:29:30.079 [2024-07-26 11:37:25.620154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.079 [2024-07-26 11:37:25.620189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.079 qpair failed and we were unable to recover it. 00:29:30.079 [2024-07-26 11:37:25.620490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.079 [2024-07-26 11:37:25.620520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.079 qpair failed and we were unable to recover it. 00:29:30.079 [2024-07-26 11:37:25.620718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.079 [2024-07-26 11:37:25.620782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.079 qpair failed and we were unable to recover it. 00:29:30.079 [2024-07-26 11:37:25.621062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.079 [2024-07-26 11:37:25.621097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.079 qpair failed and we were unable to recover it. 00:29:30.079 [2024-07-26 11:37:25.621258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.079 [2024-07-26 11:37:25.621287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.079 qpair failed and we were unable to recover it. 00:29:30.079 [2024-07-26 11:37:25.621520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-07-26 11:37:25.621585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-07-26 11:37:25.621888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-07-26 11:37:25.621923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-07-26 11:37:25.622264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-07-26 11:37:25.622326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 
00:29:30.080 [2024-07-26 11:37:25.622652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-07-26 11:37:25.622680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-07-26 11:37:25.622906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-07-26 11:37:25.622942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-07-26 11:37:25.623236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-07-26 11:37:25.623264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-07-26 11:37:25.623557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-07-26 11:37:25.623621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-07-26 11:37:25.623930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-07-26 11:37:25.623966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-07-26 11:37:25.624227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-07-26 11:37:25.624255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-07-26 11:37:25.624479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-07-26 11:37:25.624545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-07-26 11:37:25.624872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-07-26 11:37:25.624908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-07-26 11:37:25.625206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-07-26 11:37:25.625234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-07-26 11:37:25.625498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-07-26 11:37:25.625563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 
00:29:30.080 [2024-07-26 11:37:25.625871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-07-26 11:37:25.625905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-07-26 11:37:25.626237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-07-26 11:37:25.626297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-07-26 11:37:25.626580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-07-26 11:37:25.626609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-07-26 11:37:25.626824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-07-26 11:37:25.626859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-07-26 11:37:25.627015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-07-26 11:37:25.627044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-07-26 11:37:25.627250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-07-26 11:37:25.627323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-07-26 11:37:25.627649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-07-26 11:37:25.627677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-07-26 11:37:25.628004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-07-26 11:37:25.628032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-07-26 11:37:25.628324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-07-26 11:37:25.628389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-07-26 11:37:25.628741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-07-26 11:37:25.628777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 
00:29:30.080 [2024-07-26 11:37:25.629068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-07-26 11:37:25.629096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-07-26 11:37:25.629277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-07-26 11:37:25.629342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-07-26 11:37:25.629607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-07-26 11:37:25.629637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-07-26 11:37:25.629856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-07-26 11:37:25.629885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-07-26 11:37:25.630181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-07-26 11:37:25.630245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-07-26 11:37:25.630546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-07-26 11:37:25.630582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-07-26 11:37:25.630905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-07-26 11:37:25.630934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-07-26 11:37:25.631258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-07-26 11:37:25.631322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-07-26 11:37:25.631674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-07-26 11:37:25.631742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 00:29:30.080 [2024-07-26 11:37:25.632064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.080 [2024-07-26 11:37:25.632093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.080 qpair failed and we were unable to recover it. 
00:29:30.080 [2024-07-26 11:37:25.632422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.080 [2024-07-26 11:37:25.632502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:30.080 qpair failed and we were unable to recover it.
[... the same three-message sequence — connect() failed, errno = 111 (ECONNREFUSED) / sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. — repeats continuously from 11:37:25.632422 through 11:37:25.696654 ...]
00:29:30.086 [2024-07-26 11:37:25.696654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.086 [2024-07-26 11:37:25.696683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:30.086 qpair failed and we were unable to recover it.
00:29:30.086 [2024-07-26 11:37:25.696896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-07-26 11:37:25.696929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 00:29:30.086 [2024-07-26 11:37:25.697186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-07-26 11:37:25.697221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 00:29:30.086 [2024-07-26 11:37:25.697561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-07-26 11:37:25.697597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 00:29:30.086 [2024-07-26 11:37:25.697776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-07-26 11:37:25.697809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 00:29:30.086 [2024-07-26 11:37:25.697977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-07-26 11:37:25.698010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 00:29:30.086 [2024-07-26 11:37:25.698229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.086 [2024-07-26 11:37:25.698264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.086 qpair failed and we were unable to recover it. 00:29:30.364 [2024-07-26 11:37:25.698484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.364 [2024-07-26 11:37:25.698513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.364 qpair failed and we were unable to recover it. 00:29:30.364 [2024-07-26 11:37:25.698720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.364 [2024-07-26 11:37:25.698754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.364 qpair failed and we were unable to recover it. 00:29:30.364 [2024-07-26 11:37:25.699018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.364 [2024-07-26 11:37:25.699083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.364 qpair failed and we were unable to recover it. 00:29:30.364 [2024-07-26 11:37:25.699410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.364 [2024-07-26 11:37:25.699454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.364 qpair failed and we were unable to recover it. 
00:29:30.364 [2024-07-26 11:37:25.699798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.364 [2024-07-26 11:37:25.699831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.364 qpair failed and we were unable to recover it. 00:29:30.364 [2024-07-26 11:37:25.700021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.364 [2024-07-26 11:37:25.700054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.364 qpair failed and we were unable to recover it. 00:29:30.364 [2024-07-26 11:37:25.700211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.364 [2024-07-26 11:37:25.700244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.364 qpair failed and we were unable to recover it. 00:29:30.364 [2024-07-26 11:37:25.700447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.364 [2024-07-26 11:37:25.700497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.364 qpair failed and we were unable to recover it. 00:29:30.364 [2024-07-26 11:37:25.700691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.364 [2024-07-26 11:37:25.700720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.364 qpair failed and we were unable to recover it. 00:29:30.364 [2024-07-26 11:37:25.700950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.364 [2024-07-26 11:37:25.701001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.364 qpair failed and we were unable to recover it. 00:29:30.365 [2024-07-26 11:37:25.701292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.365 [2024-07-26 11:37:25.701325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.365 qpair failed and we were unable to recover it. 00:29:30.365 [2024-07-26 11:37:25.701612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.365 [2024-07-26 11:37:25.701641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.365 qpair failed and we were unable to recover it. 00:29:30.365 [2024-07-26 11:37:25.701791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.365 [2024-07-26 11:37:25.701820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.365 qpair failed and we were unable to recover it. 00:29:30.365 [2024-07-26 11:37:25.702056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.365 [2024-07-26 11:37:25.702090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.365 qpair failed and we were unable to recover it. 
00:29:30.365 [2024-07-26 11:37:25.702327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.365 [2024-07-26 11:37:25.702391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.365 qpair failed and we were unable to recover it. 00:29:30.365 [2024-07-26 11:37:25.702666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.365 [2024-07-26 11:37:25.702694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.365 qpair failed and we were unable to recover it. 00:29:30.365 [2024-07-26 11:37:25.702920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.365 [2024-07-26 11:37:25.702948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.365 qpair failed and we were unable to recover it. 00:29:30.365 [2024-07-26 11:37:25.703162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.365 [2024-07-26 11:37:25.703197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.365 qpair failed and we were unable to recover it. 00:29:30.365 [2024-07-26 11:37:25.703463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.365 [2024-07-26 11:37:25.703527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.365 qpair failed and we were unable to recover it. 00:29:30.365 [2024-07-26 11:37:25.703751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.365 [2024-07-26 11:37:25.703786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.365 qpair failed and we were unable to recover it. 00:29:30.365 [2024-07-26 11:37:25.704102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.365 [2024-07-26 11:37:25.704165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.365 qpair failed and we were unable to recover it. 00:29:30.365 [2024-07-26 11:37:25.704503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.365 [2024-07-26 11:37:25.704532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.365 qpair failed and we were unable to recover it. 00:29:30.365 [2024-07-26 11:37:25.704779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.365 [2024-07-26 11:37:25.704842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.365 qpair failed and we were unable to recover it. 00:29:30.365 [2024-07-26 11:37:25.705136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.365 [2024-07-26 11:37:25.705172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.365 qpair failed and we were unable to recover it. 
00:29:30.365 [2024-07-26 11:37:25.705383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.365 [2024-07-26 11:37:25.705411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.365 qpair failed and we were unable to recover it. 00:29:30.365 [2024-07-26 11:37:25.705573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.365 [2024-07-26 11:37:25.705601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.365 qpair failed and we were unable to recover it. 00:29:30.365 [2024-07-26 11:37:25.705844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.365 [2024-07-26 11:37:25.705915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.365 qpair failed and we were unable to recover it. 00:29:30.365 [2024-07-26 11:37:25.706225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.365 [2024-07-26 11:37:25.706260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.365 qpair failed and we were unable to recover it. 00:29:30.365 [2024-07-26 11:37:25.706587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.365 [2024-07-26 11:37:25.706616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.365 qpair failed and we were unable to recover it. 00:29:30.365 [2024-07-26 11:37:25.706841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.365 [2024-07-26 11:37:25.706876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.365 qpair failed and we were unable to recover it. 00:29:30.365 [2024-07-26 11:37:25.707202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.365 [2024-07-26 11:37:25.707266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.365 qpair failed and we were unable to recover it. 00:29:30.365 [2024-07-26 11:37:25.707595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.365 [2024-07-26 11:37:25.707624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.365 qpair failed and we were unable to recover it. 00:29:30.365 [2024-07-26 11:37:25.707852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.365 [2024-07-26 11:37:25.707880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.365 qpair failed and we were unable to recover it. 00:29:30.365 [2024-07-26 11:37:25.708177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.365 [2024-07-26 11:37:25.708212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.365 qpair failed and we were unable to recover it. 
00:29:30.365 [2024-07-26 11:37:25.708488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.365 [2024-07-26 11:37:25.708535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.365 qpair failed and we were unable to recover it. 00:29:30.365 [2024-07-26 11:37:25.708765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.365 [2024-07-26 11:37:25.708800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.365 qpair failed and we were unable to recover it. 00:29:30.365 [2024-07-26 11:37:25.709107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.365 [2024-07-26 11:37:25.709140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.365 qpair failed and we were unable to recover it. 00:29:30.365 [2024-07-26 11:37:25.709452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.365 [2024-07-26 11:37:25.709502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.365 qpair failed and we were unable to recover it. 00:29:30.365 [2024-07-26 11:37:25.709690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.365 [2024-07-26 11:37:25.709771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.365 qpair failed and we were unable to recover it. 00:29:30.365 [2024-07-26 11:37:25.710094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.365 [2024-07-26 11:37:25.710130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.365 qpair failed and we were unable to recover it. 00:29:30.365 [2024-07-26 11:37:25.710414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.365 [2024-07-26 11:37:25.710449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.365 qpair failed and we were unable to recover it. 00:29:30.365 [2024-07-26 11:37:25.710642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.365 [2024-07-26 11:37:25.710689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.365 qpair failed and we were unable to recover it. 00:29:30.365 [2024-07-26 11:37:25.710970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.365 [2024-07-26 11:37:25.711034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.365 qpair failed and we were unable to recover it. 00:29:30.365 [2024-07-26 11:37:25.711325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.365 [2024-07-26 11:37:25.711360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.365 qpair failed and we were unable to recover it. 
00:29:30.365 [2024-07-26 11:37:25.711591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.365 [2024-07-26 11:37:25.711620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.365 qpair failed and we were unable to recover it. 00:29:30.365 [2024-07-26 11:37:25.711830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.365 [2024-07-26 11:37:25.711864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.366 qpair failed and we were unable to recover it. 00:29:30.366 [2024-07-26 11:37:25.712106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.366 [2024-07-26 11:37:25.712171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.366 qpair failed and we were unable to recover it. 00:29:30.366 [2024-07-26 11:37:25.712509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.366 [2024-07-26 11:37:25.712558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.366 qpair failed and we were unable to recover it. 00:29:30.366 [2024-07-26 11:37:25.712744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.366 [2024-07-26 11:37:25.712772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.366 qpair failed and we were unable to recover it. 00:29:30.366 [2024-07-26 11:37:25.712991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.366 [2024-07-26 11:37:25.713026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.366 qpair failed and we were unable to recover it. 00:29:30.366 [2024-07-26 11:37:25.713321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.366 [2024-07-26 11:37:25.713386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.366 qpair failed and we were unable to recover it. 00:29:30.366 [2024-07-26 11:37:25.713690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.366 [2024-07-26 11:37:25.713736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.366 qpair failed and we were unable to recover it. 00:29:30.366 [2024-07-26 11:37:25.714013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.366 [2024-07-26 11:37:25.714041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.366 qpair failed and we were unable to recover it. 00:29:30.366 [2024-07-26 11:37:25.714277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.366 [2024-07-26 11:37:25.714313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.366 qpair failed and we were unable to recover it. 
00:29:30.366 [2024-07-26 11:37:25.714577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.366 [2024-07-26 11:37:25.714606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.366 qpair failed and we were unable to recover it. 00:29:30.366 [2024-07-26 11:37:25.714801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.366 [2024-07-26 11:37:25.714836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.366 qpair failed and we were unable to recover it. 00:29:30.366 [2024-07-26 11:37:25.715073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.366 [2024-07-26 11:37:25.715101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.366 qpair failed and we were unable to recover it. 00:29:30.366 [2024-07-26 11:37:25.715292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.366 [2024-07-26 11:37:25.715355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.366 qpair failed and we were unable to recover it. 00:29:30.366 [2024-07-26 11:37:25.715694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.366 [2024-07-26 11:37:25.715723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.366 qpair failed and we were unable to recover it. 00:29:30.366 [2024-07-26 11:37:25.715994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.366 [2024-07-26 11:37:25.716029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.366 qpair failed and we were unable to recover it. 00:29:30.366 [2024-07-26 11:37:25.716247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.366 [2024-07-26 11:37:25.716275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.366 qpair failed and we were unable to recover it. 00:29:30.366 [2024-07-26 11:37:25.716508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.366 [2024-07-26 11:37:25.716537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.366 qpair failed and we were unable to recover it. 00:29:30.366 [2024-07-26 11:37:25.716765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.366 [2024-07-26 11:37:25.716829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.366 qpair failed and we were unable to recover it. 00:29:30.366 [2024-07-26 11:37:25.717153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.366 [2024-07-26 11:37:25.717188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.366 qpair failed and we were unable to recover it. 
00:29:30.366 [2024-07-26 11:37:25.717494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.366 [2024-07-26 11:37:25.717523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.366 qpair failed and we were unable to recover it. 00:29:30.366 [2024-07-26 11:37:25.717738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.366 [2024-07-26 11:37:25.717773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.366 qpair failed and we were unable to recover it. 00:29:30.366 [2024-07-26 11:37:25.717996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.366 [2024-07-26 11:37:25.718059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.366 qpair failed and we were unable to recover it. 00:29:30.366 [2024-07-26 11:37:25.718359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.366 [2024-07-26 11:37:25.718422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.366 qpair failed and we were unable to recover it. 00:29:30.366 [2024-07-26 11:37:25.718699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.366 [2024-07-26 11:37:25.718728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.366 qpair failed and we were unable to recover it. 00:29:30.366 [2024-07-26 11:37:25.718944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.366 [2024-07-26 11:37:25.718979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.366 qpair failed and we were unable to recover it. 00:29:30.366 [2024-07-26 11:37:25.719241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.366 [2024-07-26 11:37:25.719305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.366 qpair failed and we were unable to recover it. 00:29:30.366 [2024-07-26 11:37:25.719639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.366 [2024-07-26 11:37:25.719667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.366 qpair failed and we were unable to recover it. 00:29:30.366 [2024-07-26 11:37:25.720016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.366 [2024-07-26 11:37:25.720084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.366 qpair failed and we were unable to recover it. 00:29:30.366 [2024-07-26 11:37:25.720381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.366 [2024-07-26 11:37:25.720417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.366 qpair failed and we were unable to recover it. 
00:29:30.366 [2024-07-26 11:37:25.720744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.366 [2024-07-26 11:37:25.720808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.366 qpair failed and we were unable to recover it. 00:29:30.366 [2024-07-26 11:37:25.721075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.366 [2024-07-26 11:37:25.721110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.366 qpair failed and we were unable to recover it. 00:29:30.366 [2024-07-26 11:37:25.721325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.366 [2024-07-26 11:37:25.721358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.366 qpair failed and we were unable to recover it. 00:29:30.366 [2024-07-26 11:37:25.721603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.366 [2024-07-26 11:37:25.721633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.366 qpair failed and we were unable to recover it. 00:29:30.366 [2024-07-26 11:37:25.721830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.366 [2024-07-26 11:37:25.721895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.366 qpair failed and we were unable to recover it. 00:29:30.366 [2024-07-26 11:37:25.722221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.366 [2024-07-26 11:37:25.722277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.366 qpair failed and we were unable to recover it. 00:29:30.366 [2024-07-26 11:37:25.722571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.366 [2024-07-26 11:37:25.722600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.366 qpair failed and we were unable to recover it. 00:29:30.366 [2024-07-26 11:37:25.722797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.366 [2024-07-26 11:37:25.722832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.366 qpair failed and we were unable to recover it. 00:29:30.366 [2024-07-26 11:37:25.723081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.367 [2024-07-26 11:37:25.723145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.367 qpair failed and we were unable to recover it. 00:29:30.367 [2024-07-26 11:37:25.723485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.367 [2024-07-26 11:37:25.723515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.367 qpair failed and we were unable to recover it. 
00:29:30.367 [2024-07-26 11:37:25.723704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.367 [2024-07-26 11:37:25.723733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.367 qpair failed and we were unable to recover it. 00:29:30.367 [2024-07-26 11:37:25.723955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.367 [2024-07-26 11:37:25.723990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.367 qpair failed and we were unable to recover it. 00:29:30.367 [2024-07-26 11:37:25.724199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.367 [2024-07-26 11:37:25.724263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.367 qpair failed and we were unable to recover it. 00:29:30.367 [2024-07-26 11:37:25.724570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.367 [2024-07-26 11:37:25.724598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.367 qpair failed and we were unable to recover it. 00:29:30.367 [2024-07-26 11:37:25.724766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.367 [2024-07-26 11:37:25.724795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.367 qpair failed and we were unable to recover it. 00:29:30.367 [2024-07-26 11:37:25.724978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.367 [2024-07-26 11:37:25.725013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.367 qpair failed and we were unable to recover it. 00:29:30.367 [2024-07-26 11:37:25.725246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.367 [2024-07-26 11:37:25.725310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.367 qpair failed and we were unable to recover it. 00:29:30.367 [2024-07-26 11:37:25.725630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.367 [2024-07-26 11:37:25.725659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.367 qpair failed and we were unable to recover it. 00:29:30.367 [2024-07-26 11:37:25.725988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.367 [2024-07-26 11:37:25.726052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.367 qpair failed and we were unable to recover it. 00:29:30.367 [2024-07-26 11:37:25.726381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.367 [2024-07-26 11:37:25.726461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.367 qpair failed and we were unable to recover it. 
00:29:30.367 [2024-07-26 11:37:25.726702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.367 [2024-07-26 11:37:25.726759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.367 qpair failed and we were unable to recover it. 00:29:30.367 [2024-07-26 11:37:25.727040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.367 [2024-07-26 11:37:25.727075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.367 qpair failed and we were unable to recover it. 00:29:30.367 [2024-07-26 11:37:25.727354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.367 [2024-07-26 11:37:25.727382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.367 qpair failed and we were unable to recover it. 00:29:30.367 [2024-07-26 11:37:25.727725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.367 [2024-07-26 11:37:25.727801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.367 qpair failed and we were unable to recover it. 00:29:30.367 [2024-07-26 11:37:25.728087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.367 [2024-07-26 11:37:25.728151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.367 qpair failed and we were unable to recover it. 00:29:30.367 [2024-07-26 11:37:25.728460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.367 [2024-07-26 11:37:25.728508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.367 qpair failed and we were unable to recover it. 00:29:30.367 [2024-07-26 11:37:25.728721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.367 [2024-07-26 11:37:25.728750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.367 qpair failed and we were unable to recover it. 00:29:30.367 [2024-07-26 11:37:25.729071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.367 [2024-07-26 11:37:25.729128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.367 qpair failed and we were unable to recover it. 00:29:30.367 [2024-07-26 11:37:25.729472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.367 [2024-07-26 11:37:25.729501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.367 qpair failed and we were unable to recover it. 00:29:30.367 [2024-07-26 11:37:25.729663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.367 [2024-07-26 11:37:25.729691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.367 qpair failed and we were unable to recover it. 
00:29:30.367 [2024-07-26 11:37:25.729879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.367 [2024-07-26 11:37:25.729908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.367 qpair failed and we were unable to recover it. 00:29:30.367 [2024-07-26 11:37:25.730121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.367 [2024-07-26 11:37:25.730156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.367 qpair failed and we were unable to recover it. 00:29:30.367 [2024-07-26 11:37:25.730456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.367 [2024-07-26 11:37:25.730522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.367 qpair failed and we were unable to recover it. 00:29:30.367 [2024-07-26 11:37:25.730746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.367 [2024-07-26 11:37:25.730781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.367 qpair failed and we were unable to recover it. 00:29:30.367 [2024-07-26 11:37:25.731042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.367 [2024-07-26 11:37:25.731070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.367 qpair failed and we were unable to recover it. 00:29:30.367 [2024-07-26 11:37:25.731220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.367 [2024-07-26 11:37:25.731254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.367 qpair failed and we were unable to recover it. 00:29:30.367 [2024-07-26 11:37:25.731498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.367 [2024-07-26 11:37:25.731527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.367 qpair failed and we were unable to recover it. 00:29:30.367 [2024-07-26 11:37:25.731760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.367 [2024-07-26 11:37:25.731795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.367 qpair failed and we were unable to recover it. 00:29:30.367 [2024-07-26 11:37:25.732035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.367 [2024-07-26 11:37:25.732064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.367 qpair failed and we were unable to recover it. 00:29:30.367 [2024-07-26 11:37:25.732283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.367 [2024-07-26 11:37:25.732318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.367 qpair failed and we were unable to recover it. 
00:29:30.367 [2024-07-26 11:37:25.732533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.367 [2024-07-26 11:37:25.732580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.367 qpair failed and we were unable to recover it. 00:29:30.367 [2024-07-26 11:37:25.732787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.367 [2024-07-26 11:37:25.732822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.367 qpair failed and we were unable to recover it. 00:29:30.367 [2024-07-26 11:37:25.732999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.367 [2024-07-26 11:37:25.733032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.367 qpair failed and we were unable to recover it. 00:29:30.367 [2024-07-26 11:37:25.733246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.367 [2024-07-26 11:37:25.733281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.367 qpair failed and we were unable to recover it. 00:29:30.367 [2024-07-26 11:37:25.733545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.367 [2024-07-26 11:37:25.733574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.367 qpair failed and we were unable to recover it. 00:29:30.368 [2024-07-26 11:37:25.733755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.368 [2024-07-26 11:37:25.733791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.368 qpair failed and we were unable to recover it. 00:29:30.368 [2024-07-26 11:37:25.734061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.368 [2024-07-26 11:37:25.734090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.368 qpair failed and we were unable to recover it. 00:29:30.368 [2024-07-26 11:37:25.734382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.368 [2024-07-26 11:37:25.734416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.368 qpair failed and we were unable to recover it. 00:29:30.368 [2024-07-26 11:37:25.734753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.368 [2024-07-26 11:37:25.734816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.368 qpair failed and we were unable to recover it. 00:29:30.368 [2024-07-26 11:37:25.735094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.368 [2024-07-26 11:37:25.735129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.368 qpair failed and we were unable to recover it. 
00:29:30.368 [2024-07-26 11:37:25.735419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.368 [2024-07-26 11:37:25.735454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:30.368 qpair failed and we were unable to recover it.
[... the same three records repeat for every reconnect attempt from 11:37:25.735 through 11:37:25.803, differing only in timestamp: connect() fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7f0cb4000b90 at 10.0.0.2, port 4420, and the qpair cannot be recovered ...]
00:29:30.374 [2024-07-26 11:37:25.803127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.374 [2024-07-26 11:37:25.803162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:30.374 qpair failed and we were unable to recover it.
00:29:30.374 [2024-07-26 11:37:25.803485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.374 [2024-07-26 11:37:25.803551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.374 qpair failed and we were unable to recover it. 00:29:30.374 [2024-07-26 11:37:25.803838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.374 [2024-07-26 11:37:25.803873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.374 qpair failed and we were unable to recover it. 00:29:30.374 [2024-07-26 11:37:25.804164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.374 [2024-07-26 11:37:25.804192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.374 qpair failed and we were unable to recover it. 00:29:30.374 [2024-07-26 11:37:25.804384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.374 [2024-07-26 11:37:25.804420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.374 qpair failed and we were unable to recover it. 00:29:30.374 [2024-07-26 11:37:25.804668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.374 [2024-07-26 11:37:25.804724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.374 qpair failed and we were unable to recover it. 00:29:30.374 [2024-07-26 11:37:25.805046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.374 [2024-07-26 11:37:25.805080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.374 qpair failed and we were unable to recover it. 00:29:30.374 [2024-07-26 11:37:25.805409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.374 [2024-07-26 11:37:25.805446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.374 qpair failed and we were unable to recover it. 00:29:30.374 [2024-07-26 11:37:25.805704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.374 [2024-07-26 11:37:25.805764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.374 qpair failed and we were unable to recover it. 00:29:30.374 [2024-07-26 11:37:25.806046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.374 [2024-07-26 11:37:25.806109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.374 qpair failed and we were unable to recover it. 00:29:30.374 [2024-07-26 11:37:25.806399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.374 [2024-07-26 11:37:25.806442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.374 qpair failed and we were unable to recover it. 
00:29:30.374 [2024-07-26 11:37:25.806673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.374 [2024-07-26 11:37:25.806702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.374 qpair failed and we were unable to recover it. 00:29:30.374 [2024-07-26 11:37:25.807007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.374 [2024-07-26 11:37:25.807059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.374 qpair failed and we were unable to recover it. 00:29:30.374 [2024-07-26 11:37:25.807328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.374 [2024-07-26 11:37:25.807391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.374 qpair failed and we were unable to recover it. 00:29:30.374 [2024-07-26 11:37:25.807678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.374 [2024-07-26 11:37:25.807706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.374 qpair failed and we were unable to recover it. 00:29:30.374 [2024-07-26 11:37:25.807869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.374 [2024-07-26 11:37:25.807897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.374 qpair failed and we were unable to recover it. 00:29:30.374 [2024-07-26 11:37:25.808114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.374 [2024-07-26 11:37:25.808148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.374 qpair failed and we were unable to recover it. 00:29:30.374 [2024-07-26 11:37:25.808413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.374 [2024-07-26 11:37:25.808507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.374 qpair failed and we were unable to recover it. 00:29:30.374 [2024-07-26 11:37:25.808716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.374 [2024-07-26 11:37:25.808763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.374 qpair failed and we were unable to recover it. 00:29:30.374 [2024-07-26 11:37:25.809016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.374 [2024-07-26 11:37:25.809045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.374 qpair failed and we were unable to recover it. 00:29:30.374 [2024-07-26 11:37:25.809197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.374 [2024-07-26 11:37:25.809231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.374 qpair failed and we were unable to recover it. 
00:29:30.374 [2024-07-26 11:37:25.809462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.374 [2024-07-26 11:37:25.809527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.374 qpair failed and we were unable to recover it. 00:29:30.374 [2024-07-26 11:37:25.809809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.374 [2024-07-26 11:37:25.809844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.374 qpair failed and we were unable to recover it. 00:29:30.374 [2024-07-26 11:37:25.810067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.374 [2024-07-26 11:37:25.810100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.374 qpair failed and we were unable to recover it. 00:29:30.374 [2024-07-26 11:37:25.810326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.374 [2024-07-26 11:37:25.810390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.374 qpair failed and we were unable to recover it. 00:29:30.374 [2024-07-26 11:37:25.810713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.374 [2024-07-26 11:37:25.810788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.374 qpair failed and we were unable to recover it. 00:29:30.374 [2024-07-26 11:37:25.811101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.374 [2024-07-26 11:37:25.811135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.374 qpair failed and we were unable to recover it. 00:29:30.374 [2024-07-26 11:37:25.811471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.374 [2024-07-26 11:37:25.811516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.374 qpair failed and we were unable to recover it. 00:29:30.374 [2024-07-26 11:37:25.811717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.374 [2024-07-26 11:37:25.811752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.374 qpair failed and we were unable to recover it. 00:29:30.375 [2024-07-26 11:37:25.812002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.375 [2024-07-26 11:37:25.812066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.375 qpair failed and we were unable to recover it. 00:29:30.375 [2024-07-26 11:37:25.812378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.375 [2024-07-26 11:37:25.812413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.375 qpair failed and we were unable to recover it. 
00:29:30.375 [2024-07-26 11:37:25.812734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.375 [2024-07-26 11:37:25.812788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.375 qpair failed and we were unable to recover it. 00:29:30.375 [2024-07-26 11:37:25.813124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.375 [2024-07-26 11:37:25.813192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.375 qpair failed and we were unable to recover it. 00:29:30.375 [2024-07-26 11:37:25.813519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.375 [2024-07-26 11:37:25.813585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.375 qpair failed and we were unable to recover it. 00:29:30.375 [2024-07-26 11:37:25.813876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.375 [2024-07-26 11:37:25.813911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.375 qpair failed and we were unable to recover it. 00:29:30.375 [2024-07-26 11:37:25.814237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.375 [2024-07-26 11:37:25.814305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.375 qpair failed and we were unable to recover it. 00:29:30.375 [2024-07-26 11:37:25.814639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.375 [2024-07-26 11:37:25.814668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.375 qpair failed and we were unable to recover it. 00:29:30.375 [2024-07-26 11:37:25.814966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.375 [2024-07-26 11:37:25.815030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.375 qpair failed and we were unable to recover it. 00:29:30.375 [2024-07-26 11:37:25.815345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.375 [2024-07-26 11:37:25.815380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.375 qpair failed and we were unable to recover it. 00:29:30.375 [2024-07-26 11:37:25.815724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.375 [2024-07-26 11:37:25.815776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.375 qpair failed and we were unable to recover it. 00:29:30.375 [2024-07-26 11:37:25.816030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.375 [2024-07-26 11:37:25.816065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.375 qpair failed and we were unable to recover it. 
00:29:30.375 [2024-07-26 11:37:25.816268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.375 [2024-07-26 11:37:25.816333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.375 qpair failed and we were unable to recover it. 00:29:30.375 [2024-07-26 11:37:25.816666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.375 [2024-07-26 11:37:25.816694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.375 qpair failed and we were unable to recover it. 00:29:30.375 [2024-07-26 11:37:25.816984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.375 [2024-07-26 11:37:25.817013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.375 qpair failed and we were unable to recover it. 00:29:30.375 [2024-07-26 11:37:25.817223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.375 [2024-07-26 11:37:25.817258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.375 qpair failed and we were unable to recover it. 00:29:30.375 [2024-07-26 11:37:25.817489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.375 [2024-07-26 11:37:25.817525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.375 qpair failed and we were unable to recover it. 00:29:30.375 [2024-07-26 11:37:25.817761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.375 [2024-07-26 11:37:25.817796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.375 qpair failed and we were unable to recover it. 00:29:30.375 [2024-07-26 11:37:25.818131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.375 [2024-07-26 11:37:25.818179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.375 qpair failed and we were unable to recover it. 00:29:30.375 [2024-07-26 11:37:25.818462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.375 [2024-07-26 11:37:25.818498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.375 qpair failed and we were unable to recover it. 00:29:30.375 [2024-07-26 11:37:25.818787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.375 [2024-07-26 11:37:25.818851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.375 qpair failed and we were unable to recover it. 00:29:30.375 [2024-07-26 11:37:25.819166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.375 [2024-07-26 11:37:25.819201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.375 qpair failed and we were unable to recover it. 
00:29:30.375 [2024-07-26 11:37:25.819507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.375 [2024-07-26 11:37:25.819536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.375 qpair failed and we were unable to recover it. 00:29:30.375 [2024-07-26 11:37:25.819773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.375 [2024-07-26 11:37:25.819808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.375 qpair failed and we were unable to recover it. 00:29:30.375 [2024-07-26 11:37:25.820070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.375 [2024-07-26 11:37:25.820133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.375 qpair failed and we were unable to recover it. 00:29:30.375 [2024-07-26 11:37:25.820392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.375 [2024-07-26 11:37:25.820439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.375 qpair failed and we were unable to recover it. 00:29:30.375 [2024-07-26 11:37:25.820618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.375 [2024-07-26 11:37:25.820646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.375 qpair failed and we were unable to recover it. 00:29:30.375 [2024-07-26 11:37:25.820819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.375 [2024-07-26 11:37:25.820854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.375 qpair failed and we were unable to recover it. 00:29:30.375 [2024-07-26 11:37:25.821088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.375 [2024-07-26 11:37:25.821153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.375 qpair failed and we were unable to recover it. 00:29:30.375 [2024-07-26 11:37:25.821472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.375 [2024-07-26 11:37:25.821507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.375 qpair failed and we were unable to recover it. 00:29:30.375 [2024-07-26 11:37:25.821813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.375 [2024-07-26 11:37:25.821841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.375 qpair failed and we were unable to recover it. 00:29:30.375 [2024-07-26 11:37:25.822064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.375 [2024-07-26 11:37:25.822099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.375 qpair failed and we were unable to recover it. 
00:29:30.375 [2024-07-26 11:37:25.822401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.375 [2024-07-26 11:37:25.822478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.375 qpair failed and we were unable to recover it. 00:29:30.375 [2024-07-26 11:37:25.822766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.375 [2024-07-26 11:37:25.822827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.375 qpair failed and we were unable to recover it. 00:29:30.375 [2024-07-26 11:37:25.823101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.375 [2024-07-26 11:37:25.823136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.375 qpair failed and we were unable to recover it. 00:29:30.375 [2024-07-26 11:37:25.823360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.375 [2024-07-26 11:37:25.823424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.375 qpair failed and we were unable to recover it. 00:29:30.375 [2024-07-26 11:37:25.823711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.375 [2024-07-26 11:37:25.823781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.376 qpair failed and we were unable to recover it. 00:29:30.376 [2024-07-26 11:37:25.824088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.376 [2024-07-26 11:37:25.824122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.376 qpair failed and we were unable to recover it. 00:29:30.376 [2024-07-26 11:37:25.824473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.376 [2024-07-26 11:37:25.824502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.376 qpair failed and we were unable to recover it. 00:29:30.376 [2024-07-26 11:37:25.824755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.376 [2024-07-26 11:37:25.824790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.376 qpair failed and we were unable to recover it. 00:29:30.376 [2024-07-26 11:37:25.825098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.376 [2024-07-26 11:37:25.825162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.376 qpair failed and we were unable to recover it. 00:29:30.376 [2024-07-26 11:37:25.825474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.376 [2024-07-26 11:37:25.825511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.376 qpair failed and we were unable to recover it. 
00:29:30.376 [2024-07-26 11:37:25.825829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.376 [2024-07-26 11:37:25.825877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.376 qpair failed and we were unable to recover it. 00:29:30.376 [2024-07-26 11:37:25.826194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.376 [2024-07-26 11:37:25.826249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.376 qpair failed and we were unable to recover it. 00:29:30.376 [2024-07-26 11:37:25.826521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.376 [2024-07-26 11:37:25.826557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.376 qpair failed and we were unable to recover it. 00:29:30.376 [2024-07-26 11:37:25.826793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.376 [2024-07-26 11:37:25.826850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.376 qpair failed and we were unable to recover it. 00:29:30.376 [2024-07-26 11:37:25.827181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.376 [2024-07-26 11:37:25.827229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.376 qpair failed and we were unable to recover it. 00:29:30.376 [2024-07-26 11:37:25.827511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.376 [2024-07-26 11:37:25.827540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.376 qpair failed and we were unable to recover it. 00:29:30.376 [2024-07-26 11:37:25.827746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.376 [2024-07-26 11:37:25.827810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.376 qpair failed and we were unable to recover it. 00:29:30.376 [2024-07-26 11:37:25.828124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.376 [2024-07-26 11:37:25.828159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.376 qpair failed and we were unable to recover it. 00:29:30.376 [2024-07-26 11:37:25.828416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.376 [2024-07-26 11:37:25.828461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.376 qpair failed and we were unable to recover it. 00:29:30.376 [2024-07-26 11:37:25.828647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.376 [2024-07-26 11:37:25.828694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.376 qpair failed and we were unable to recover it. 
00:29:30.376 [2024-07-26 11:37:25.828934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.376 [2024-07-26 11:37:25.828998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.376 qpair failed and we were unable to recover it. 00:29:30.376 [2024-07-26 11:37:25.829281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.376 [2024-07-26 11:37:25.829315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.376 qpair failed and we were unable to recover it. 00:29:30.376 [2024-07-26 11:37:25.829596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.376 [2024-07-26 11:37:25.829646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.376 qpair failed and we were unable to recover it. 00:29:30.376 [2024-07-26 11:37:25.829963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.376 [2024-07-26 11:37:25.829998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.376 qpair failed and we were unable to recover it. 00:29:30.376 [2024-07-26 11:37:25.830337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.376 [2024-07-26 11:37:25.830401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.376 qpair failed and we were unable to recover it. 00:29:30.376 [2024-07-26 11:37:25.830694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.376 [2024-07-26 11:37:25.830740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.376 qpair failed and we were unable to recover it. 00:29:30.376 [2024-07-26 11:37:25.831032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.376 [2024-07-26 11:37:25.831061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.376 qpair failed and we were unable to recover it. 00:29:30.376 [2024-07-26 11:37:25.831306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.376 [2024-07-26 11:37:25.831341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.376 qpair failed and we were unable to recover it. 00:29:30.376 [2024-07-26 11:37:25.831598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.376 [2024-07-26 11:37:25.831663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.376 qpair failed and we were unable to recover it. 00:29:30.376 [2024-07-26 11:37:25.831986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.376 [2024-07-26 11:37:25.832042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.376 qpair failed and we were unable to recover it. 
00:29:30.376 [2024-07-26 11:37:25.832329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.376 [2024-07-26 11:37:25.832357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.376 qpair failed and we were unable to recover it. 00:29:30.376 [2024-07-26 11:37:25.832638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.376 [2024-07-26 11:37:25.832667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.376 qpair failed and we were unable to recover it. 00:29:30.376 [2024-07-26 11:37:25.832986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.376 [2024-07-26 11:37:25.833050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.376 qpair failed and we were unable to recover it. 00:29:30.376 [2024-07-26 11:37:25.833382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.376 [2024-07-26 11:37:25.833465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.376 qpair failed and we were unable to recover it. 00:29:30.376 [2024-07-26 11:37:25.833755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.376 [2024-07-26 11:37:25.833832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.376 qpair failed and we were unable to recover it. 00:29:30.376 [2024-07-26 11:37:25.834159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.376 [2024-07-26 11:37:25.834233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.376 qpair failed and we were unable to recover it. 00:29:30.376 [2024-07-26 11:37:25.834513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.376 [2024-07-26 11:37:25.834548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.376 qpair failed and we were unable to recover it. 00:29:30.376 [2024-07-26 11:37:25.834818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.376 [2024-07-26 11:37:25.834892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.376 qpair failed and we were unable to recover it. 00:29:30.376 [2024-07-26 11:37:25.835224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.376 [2024-07-26 11:37:25.835278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.376 qpair failed and we were unable to recover it. 00:29:30.376 [2024-07-26 11:37:25.835601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.376 [2024-07-26 11:37:25.835637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.376 qpair failed and we were unable to recover it. 
00:29:30.376 [2024-07-26 11:37:25.835942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.376 [2024-07-26 11:37:25.836005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.376 qpair failed and we were unable to recover it. 00:29:30.376 [2024-07-26 11:37:25.836251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.376 [2024-07-26 11:37:25.836286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.377 qpair failed and we were unable to recover it. 00:29:30.377 [2024-07-26 11:37:25.836509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.377 [2024-07-26 11:37:25.836543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.377 qpair failed and we were unable to recover it. 00:29:30.377 [2024-07-26 11:37:25.836801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.377 [2024-07-26 11:37:25.836835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.377 qpair failed and we were unable to recover it. 00:29:30.377 [2024-07-26 11:37:25.837139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.377 [2024-07-26 11:37:25.837203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.377 qpair failed and we were unable to recover it. 00:29:30.377 [2024-07-26 11:37:25.837530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.377 [2024-07-26 11:37:25.837601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.377 qpair failed and we were unable to recover it. 00:29:30.377 [2024-07-26 11:37:25.837933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.377 [2024-07-26 11:37:25.837984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.377 qpair failed and we were unable to recover it. 00:29:30.377 [2024-07-26 11:37:25.838320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.377 [2024-07-26 11:37:25.838383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.377 qpair failed and we were unable to recover it. 00:29:30.377 [2024-07-26 11:37:25.838702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.377 [2024-07-26 11:37:25.838776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.377 qpair failed and we were unable to recover it. 00:29:30.377 [2024-07-26 11:37:25.839052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.377 [2024-07-26 11:37:25.839087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.377 qpair failed and we were unable to recover it. 
00:29:30.377 [2024-07-26 11:37:25.839305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.377 [2024-07-26 11:37:25.839334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.377 qpair failed and we were unable to recover it. 00:29:30.377 [2024-07-26 11:37:25.839590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.377 [2024-07-26 11:37:25.839642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.377 qpair failed and we were unable to recover it. 00:29:30.377 [2024-07-26 11:37:25.839968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.377 [2024-07-26 11:37:25.840032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.377 qpair failed and we were unable to recover it. 00:29:30.377 [2024-07-26 11:37:25.840317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.377 [2024-07-26 11:37:25.840352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.377 qpair failed and we were unable to recover it. 00:29:30.377 [2024-07-26 11:37:25.840584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.377 [2024-07-26 11:37:25.840612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.377 qpair failed and we were unable to recover it. 00:29:30.377 [2024-07-26 11:37:25.840782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.377 [2024-07-26 11:37:25.840817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.377 qpair failed and we were unable to recover it. 00:29:30.377 [2024-07-26 11:37:25.841102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.377 [2024-07-26 11:37:25.841167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.377 qpair failed and we were unable to recover it. 00:29:30.377 [2024-07-26 11:37:25.841488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.377 [2024-07-26 11:37:25.841539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.377 qpair failed and we were unable to recover it. 00:29:30.377 [2024-07-26 11:37:25.841836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.377 [2024-07-26 11:37:25.841864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.377 qpair failed and we were unable to recover it. 00:29:30.377 [2024-07-26 11:37:25.842104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.377 [2024-07-26 11:37:25.842139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.377 qpair failed and we were unable to recover it. 
00:29:30.377 [2024-07-26 11:37:25.842414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.377 [2024-07-26 11:37:25.842492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.377 qpair failed and we were unable to recover it. 00:29:30.377 [2024-07-26 11:37:25.842702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.377 [2024-07-26 11:37:25.842751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.377 qpair failed and we were unable to recover it. 00:29:30.377 [2024-07-26 11:37:25.843010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.377 [2024-07-26 11:37:25.843038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.377 qpair failed and we were unable to recover it. 00:29:30.377 [2024-07-26 11:37:25.843321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.377 [2024-07-26 11:37:25.843355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.377 qpair failed and we were unable to recover it. 00:29:30.377 [2024-07-26 11:37:25.843684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.377 [2024-07-26 11:37:25.843737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.377 qpair failed and we were unable to recover it. 00:29:30.377 [2024-07-26 11:37:25.844057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.377 [2024-07-26 11:37:25.844091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.377 qpair failed and we were unable to recover it. 00:29:30.377 [2024-07-26 11:37:25.844446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.377 [2024-07-26 11:37:25.844509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.377 qpair failed and we were unable to recover it. 00:29:30.377 [2024-07-26 11:37:25.844698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.377 [2024-07-26 11:37:25.844744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.377 qpair failed and we were unable to recover it. 00:29:30.377 [2024-07-26 11:37:25.844995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.377 [2024-07-26 11:37:25.845058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.377 qpair failed and we were unable to recover it. 00:29:30.377 [2024-07-26 11:37:25.845291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.377 [2024-07-26 11:37:25.845326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.377 qpair failed and we were unable to recover it. 
00:29:30.377 [2024-07-26 11:37:25.845555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.377 [2024-07-26 11:37:25.845584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.377 qpair failed and we were unable to recover it. 00:29:30.377 [2024-07-26 11:37:25.845849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.377 [2024-07-26 11:37:25.845884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.377 qpair failed and we were unable to recover it. 00:29:30.377 [2024-07-26 11:37:25.846122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.377 [2024-07-26 11:37:25.846186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.377 qpair failed and we were unable to recover it. 00:29:30.377 [2024-07-26 11:37:25.846486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.377 [2024-07-26 11:37:25.846522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.377 qpair failed and we were unable to recover it. 00:29:30.377 [2024-07-26 11:37:25.846746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.377 [2024-07-26 11:37:25.846775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.377 qpair failed and we were unable to recover it. 00:29:30.377 [2024-07-26 11:37:25.846939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.377 [2024-07-26 11:37:25.846975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.377 qpair failed and we were unable to recover it. 00:29:30.377 [2024-07-26 11:37:25.847189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.377 [2024-07-26 11:37:25.847252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.377 qpair failed and we were unable to recover it. 00:29:30.377 [2024-07-26 11:37:25.847596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.378 [2024-07-26 11:37:25.847674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.378 qpair failed and we were unable to recover it. 00:29:30.378 [2024-07-26 11:37:25.847937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.378 [2024-07-26 11:37:25.847966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.378 qpair failed and we were unable to recover it. 00:29:30.378 [2024-07-26 11:37:25.848147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.378 [2024-07-26 11:37:25.848182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.378 qpair failed and we were unable to recover it. 
00:29:30.378 [2024-07-26 11:37:25.848392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.378 [2024-07-26 11:37:25.848481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.378 qpair failed and we were unable to recover it. 00:29:30.378 [2024-07-26 11:37:25.848781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.378 [2024-07-26 11:37:25.848816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.378 qpair failed and we were unable to recover it. 00:29:30.378 [2024-07-26 11:37:25.849062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.378 [2024-07-26 11:37:25.849091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.378 qpair failed and we were unable to recover it. 00:29:30.378 [2024-07-26 11:37:25.849310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.378 [2024-07-26 11:37:25.849375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.378 qpair failed and we were unable to recover it. 00:29:30.378 [2024-07-26 11:37:25.849659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.378 [2024-07-26 11:37:25.849688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.378 qpair failed and we were unable to recover it. 00:29:30.378 [2024-07-26 11:37:25.849893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.378 [2024-07-26 11:37:25.849928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.378 qpair failed and we were unable to recover it. 00:29:30.378 [2024-07-26 11:37:25.850190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.378 [2024-07-26 11:37:25.850218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.378 qpair failed and we were unable to recover it. 00:29:30.378 [2024-07-26 11:37:25.850450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.378 [2024-07-26 11:37:25.850485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.378 qpair failed and we were unable to recover it. 00:29:30.378 [2024-07-26 11:37:25.850781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.378 [2024-07-26 11:37:25.850846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.378 qpair failed and we were unable to recover it. 00:29:30.378 [2024-07-26 11:37:25.851184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.378 [2024-07-26 11:37:25.851252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.378 qpair failed and we were unable to recover it. 
00:29:30.378 [2024-07-26 11:37:25.851540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.378 [2024-07-26 11:37:25.851569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.378 qpair failed and we were unable to recover it. 00:29:30.378 [2024-07-26 11:37:25.851836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.378 [2024-07-26 11:37:25.851872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.378 qpair failed and we were unable to recover it. 00:29:30.378 [2024-07-26 11:37:25.852181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.378 [2024-07-26 11:37:25.852244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.378 qpair failed and we were unable to recover it. 00:29:30.378 [2024-07-26 11:37:25.852547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.378 [2024-07-26 11:37:25.852583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.378 qpair failed and we were unable to recover it. 00:29:30.378 [2024-07-26 11:37:25.852906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.378 [2024-07-26 11:37:25.852971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.378 qpair failed and we were unable to recover it. 00:29:30.378 [2024-07-26 11:37:25.853256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.378 [2024-07-26 11:37:25.853291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.378 qpair failed and we were unable to recover it. 00:29:30.378 [2024-07-26 11:37:25.853527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.378 [2024-07-26 11:37:25.853592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.378 qpair failed and we were unable to recover it. 00:29:30.378 [2024-07-26 11:37:25.853874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.378 [2024-07-26 11:37:25.853909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.378 qpair failed and we were unable to recover it. 00:29:30.378 [2024-07-26 11:37:25.854150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.378 [2024-07-26 11:37:25.854178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.378 qpair failed and we were unable to recover it. 00:29:30.378 [2024-07-26 11:37:25.854421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.378 [2024-07-26 11:37:25.854467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.378 qpair failed and we were unable to recover it. 
00:29:30.378 [2024-07-26 11:37:25.854675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.378 [2024-07-26 11:37:25.854740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.378 qpair failed and we were unable to recover it. 00:29:30.378 [2024-07-26 11:37:25.855017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.378 [2024-07-26 11:37:25.855051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.378 qpair failed and we were unable to recover it. 00:29:30.378 [2024-07-26 11:37:25.855258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.378 [2024-07-26 11:37:25.855287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.378 qpair failed and we were unable to recover it. 00:29:30.378 [2024-07-26 11:37:25.855464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.378 [2024-07-26 11:37:25.855520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.378 qpair failed and we were unable to recover it. 00:29:30.378 [2024-07-26 11:37:25.855722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.378 [2024-07-26 11:37:25.855791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.378 qpair failed and we were unable to recover it. 00:29:30.378 [2024-07-26 11:37:25.856035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.378 [2024-07-26 11:37:25.856070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.378 qpair failed and we were unable to recover it. 00:29:30.378 [2024-07-26 11:37:25.856251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.378 [2024-07-26 11:37:25.856280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.378 qpair failed and we were unable to recover it. 00:29:30.378 [2024-07-26 11:37:25.856481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.378 [2024-07-26 11:37:25.856527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.378 qpair failed and we were unable to recover it. 00:29:30.378 [2024-07-26 11:37:25.856765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.378 [2024-07-26 11:37:25.856831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.378 qpair failed and we were unable to recover it. 00:29:30.378 [2024-07-26 11:37:25.857117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.378 [2024-07-26 11:37:25.857158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.378 qpair failed and we were unable to recover it. 
00:29:30.378 [2024-07-26 11:37:25.857486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.378 [2024-07-26 11:37:25.857538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.378 qpair failed and we were unable to recover it. 00:29:30.378 [2024-07-26 11:37:25.857771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.378 [2024-07-26 11:37:25.857805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.378 qpair failed and we were unable to recover it. 00:29:30.378 [2024-07-26 11:37:25.858142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.379 [2024-07-26 11:37:25.858206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.379 qpair failed and we were unable to recover it. 00:29:30.379 [2024-07-26 11:37:25.858503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.379 [2024-07-26 11:37:25.858539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.379 qpair failed and we were unable to recover it. 00:29:30.379 [2024-07-26 11:37:25.858858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.379 [2024-07-26 11:37:25.858886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.379 qpair failed and we were unable to recover it. 00:29:30.379 [2024-07-26 11:37:25.859168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.379 [2024-07-26 11:37:25.859204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.379 qpair failed and we were unable to recover it. 00:29:30.379 [2024-07-26 11:37:25.859413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.379 [2024-07-26 11:37:25.859507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.379 qpair failed and we were unable to recover it. 00:29:30.379 [2024-07-26 11:37:25.859771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.379 [2024-07-26 11:37:25.859805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.379 qpair failed and we were unable to recover it. 00:29:30.379 [2024-07-26 11:37:25.860142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.379 [2024-07-26 11:37:25.860171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.379 qpair failed and we were unable to recover it. 00:29:30.379 [2024-07-26 11:37:25.860513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.379 [2024-07-26 11:37:25.860570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.379 qpair failed and we were unable to recover it. 
00:29:30.379 [2024-07-26 11:37:25.860867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.379 [2024-07-26 11:37:25.860931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.379 qpair failed and we were unable to recover it. 00:29:30.379 [2024-07-26 11:37:25.861219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.379 [2024-07-26 11:37:25.861254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.379 qpair failed and we were unable to recover it. 00:29:30.379 [2024-07-26 11:37:25.861516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.379 [2024-07-26 11:37:25.861546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.379 qpair failed and we were unable to recover it. 00:29:30.379 [2024-07-26 11:37:25.861797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.379 [2024-07-26 11:37:25.861832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.379 qpair failed and we were unable to recover it. 00:29:30.379 [2024-07-26 11:37:25.862132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.379 [2024-07-26 11:37:25.862196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.379 qpair failed and we were unable to recover it. 00:29:30.379 [2024-07-26 11:37:25.862521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.379 [2024-07-26 11:37:25.862578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.379 qpair failed and we were unable to recover it. 00:29:30.379 [2024-07-26 11:37:25.862850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.379 [2024-07-26 11:37:25.862878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.379 qpair failed and we were unable to recover it. 00:29:30.379 [2024-07-26 11:37:25.863085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.379 [2024-07-26 11:37:25.863119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.379 qpair failed and we were unable to recover it. 00:29:30.379 [2024-07-26 11:37:25.863365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.379 [2024-07-26 11:37:25.863442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.379 qpair failed and we were unable to recover it. 00:29:30.379 [2024-07-26 11:37:25.863740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.379 [2024-07-26 11:37:25.863775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.379 qpair failed and we were unable to recover it. 
00:29:30.379 [2024-07-26 11:37:25.864100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.379 [2024-07-26 11:37:25.864129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.379 qpair failed and we were unable to recover it. 00:29:30.379 [2024-07-26 11:37:25.864483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.379 [2024-07-26 11:37:25.864512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.379 qpair failed and we were unable to recover it. 00:29:30.379 [2024-07-26 11:37:25.864780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.379 [2024-07-26 11:37:25.864843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.379 qpair failed and we were unable to recover it. 00:29:30.379 [2024-07-26 11:37:25.865133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.379 [2024-07-26 11:37:25.865168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.379 qpair failed and we were unable to recover it. 00:29:30.379 [2024-07-26 11:37:25.865444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.379 [2024-07-26 11:37:25.865474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.379 qpair failed and we were unable to recover it. 00:29:30.379 [2024-07-26 11:37:25.865711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.379 [2024-07-26 11:37:25.865745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.379 qpair failed and we were unable to recover it. 00:29:30.379 [2024-07-26 11:37:25.866013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.379 [2024-07-26 11:37:25.866078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.379 qpair failed and we were unable to recover it. 00:29:30.379 [2024-07-26 11:37:25.866408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.379 [2024-07-26 11:37:25.866487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.379 qpair failed and we were unable to recover it. 00:29:30.379 [2024-07-26 11:37:25.866779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.379 [2024-07-26 11:37:25.866857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.379 qpair failed and we were unable to recover it. 00:29:30.379 [2024-07-26 11:37:25.867178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.379 [2024-07-26 11:37:25.867230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.379 qpair failed and we were unable to recover it. 
00:29:30.379 [2024-07-26 11:37:25.867512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.379 [2024-07-26 11:37:25.867577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.379 qpair failed and we were unable to recover it. 00:29:30.379 [2024-07-26 11:37:25.867897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.379 [2024-07-26 11:37:25.867932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.379 qpair failed and we were unable to recover it. 00:29:30.379 [2024-07-26 11:37:25.868225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.379 [2024-07-26 11:37:25.868253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.379 qpair failed and we were unable to recover it. 00:29:30.379 [2024-07-26 11:37:25.868461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.379 [2024-07-26 11:37:25.868496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.379 qpair failed and we were unable to recover it. 00:29:30.379 [2024-07-26 11:37:25.868775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.379 [2024-07-26 11:37:25.868839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.379 qpair failed and we were unable to recover it. 00:29:30.379 [2024-07-26 11:37:25.869168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.380 [2024-07-26 11:37:25.869231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.380 qpair failed and we were unable to recover it. 00:29:30.380 [2024-07-26 11:37:25.869509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.380 [2024-07-26 11:37:25.869538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.380 qpair failed and we were unable to recover it. 00:29:30.380 [2024-07-26 11:37:25.869737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.380 [2024-07-26 11:37:25.869771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.380 qpair failed and we were unable to recover it. 00:29:30.380 [2024-07-26 11:37:25.870025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.380 [2024-07-26 11:37:25.870090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.380 qpair failed and we were unable to recover it. 00:29:30.380 [2024-07-26 11:37:25.870372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.380 [2024-07-26 11:37:25.870412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.380 qpair failed and we were unable to recover it. 
00:29:30.380 [2024-07-26 11:37:25.870655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.380 [2024-07-26 11:37:25.870684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.380 qpair failed and we were unable to recover it. 00:29:30.380 [2024-07-26 11:37:25.870902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.380 [2024-07-26 11:37:25.870937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.380 qpair failed and we were unable to recover it. 00:29:30.380 [2024-07-26 11:37:25.871169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.380 [2024-07-26 11:37:25.871233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.380 qpair failed and we were unable to recover it. 00:29:30.380 [2024-07-26 11:37:25.871524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.380 [2024-07-26 11:37:25.871560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.380 qpair failed and we were unable to recover it. 00:29:30.380 [2024-07-26 11:37:25.871766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.380 [2024-07-26 11:37:25.871795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.380 qpair failed and we were unable to recover it. 00:29:30.380 [2024-07-26 11:37:25.872023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.380 [2024-07-26 11:37:25.872059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.380 qpair failed and we were unable to recover it. 00:29:30.380 [2024-07-26 11:37:25.872381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.380 [2024-07-26 11:37:25.872475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.380 qpair failed and we were unable to recover it. 00:29:30.380 [2024-07-26 11:37:25.872741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.380 [2024-07-26 11:37:25.872776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.380 qpair failed and we were unable to recover it. 00:29:30.380 [2024-07-26 11:37:25.873060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.380 [2024-07-26 11:37:25.873088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.380 qpair failed and we were unable to recover it. 00:29:30.380 [2024-07-26 11:37:25.873323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.380 [2024-07-26 11:37:25.873358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.380 qpair failed and we were unable to recover it. 
00:29:30.380 [2024-07-26 11:37:25.873641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.380 [2024-07-26 11:37:25.873670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.380 qpair failed and we were unable to recover it. 00:29:30.380 [2024-07-26 11:37:25.873909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.380 [2024-07-26 11:37:25.873944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.380 qpair failed and we were unable to recover it. 00:29:30.380 [2024-07-26 11:37:25.874176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.380 [2024-07-26 11:37:25.874205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.380 qpair failed and we were unable to recover it. 00:29:30.380 [2024-07-26 11:37:25.874419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.380 [2024-07-26 11:37:25.874505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.380 qpair failed and we were unable to recover it. 00:29:30.380 [2024-07-26 11:37:25.874791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.380 [2024-07-26 11:37:25.874856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.380 qpair failed and we were unable to recover it. 00:29:30.380 [2024-07-26 11:37:25.875172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.380 [2024-07-26 11:37:25.875206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.380 qpair failed and we were unable to recover it. 00:29:30.380 [2024-07-26 11:37:25.875551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.380 [2024-07-26 11:37:25.875601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.380 qpair failed and we were unable to recover it. 00:29:30.380 [2024-07-26 11:37:25.875860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.380 [2024-07-26 11:37:25.875929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.380 qpair failed and we were unable to recover it. 00:29:30.380 [2024-07-26 11:37:25.876220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.380 [2024-07-26 11:37:25.876284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.380 qpair failed and we were unable to recover it. 00:29:30.380 [2024-07-26 11:37:25.876601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.380 [2024-07-26 11:37:25.876637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.380 qpair failed and we were unable to recover it. 
00:29:30.380 [2024-07-26 11:37:25.876952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.380 [2024-07-26 11:37:25.876980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.380 qpair failed and we were unable to recover it. 00:29:30.380 [2024-07-26 11:37:25.877237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.380 [2024-07-26 11:37:25.877271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.380 qpair failed and we were unable to recover it. 00:29:30.380 [2024-07-26 11:37:25.877469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.380 [2024-07-26 11:37:25.877535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.380 qpair failed and we were unable to recover it. 00:29:30.380 [2024-07-26 11:37:25.877838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.380 [2024-07-26 11:37:25.877873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.380 qpair failed and we were unable to recover it. 00:29:30.380 [2024-07-26 11:37:25.878202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.380 [2024-07-26 11:37:25.878250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.380 qpair failed and we were unable to recover it. 00:29:30.380 [2024-07-26 11:37:25.878575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.380 [2024-07-26 11:37:25.878632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.380 qpair failed and we were unable to recover it. 00:29:30.380 [2024-07-26 11:37:25.878964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.380 [2024-07-26 11:37:25.879028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.380 qpair failed and we were unable to recover it. 00:29:30.380 [2024-07-26 11:37:25.879356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.380 [2024-07-26 11:37:25.879413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.380 qpair failed and we were unable to recover it. 00:29:30.380 [2024-07-26 11:37:25.879735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.380 [2024-07-26 11:37:25.879810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.380 qpair failed and we were unable to recover it. 00:29:30.380 [2024-07-26 11:37:25.880135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.380 [2024-07-26 11:37:25.880195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.380 qpair failed and we were unable to recover it. 
00:29:30.380 [2024-07-26 11:37:25.880533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.380 [2024-07-26 11:37:25.880598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.380 qpair failed and we were unable to recover it. 00:29:30.380 [2024-07-26 11:37:25.880921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.380 [2024-07-26 11:37:25.880979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.380 qpair failed and we were unable to recover it. 00:29:30.380 [2024-07-26 11:37:25.881269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.381 [2024-07-26 11:37:25.881297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.381 qpair failed and we were unable to recover it. 00:29:30.381 [2024-07-26 11:37:25.881511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.381 [2024-07-26 11:37:25.881547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.381 qpair failed and we were unable to recover it. 00:29:30.381 [2024-07-26 11:37:25.881764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.381 [2024-07-26 11:37:25.881827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.381 qpair failed and we were unable to recover it. 00:29:30.381 [2024-07-26 11:37:25.882100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.381 [2024-07-26 11:37:25.882135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.381 qpair failed and we were unable to recover it. 00:29:30.381 [2024-07-26 11:37:25.882371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.381 [2024-07-26 11:37:25.882399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.381 qpair failed and we were unable to recover it. 00:29:30.381 [2024-07-26 11:37:25.882540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.381 [2024-07-26 11:37:25.882568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.381 qpair failed and we were unable to recover it. 00:29:30.381 [2024-07-26 11:37:25.882799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.381 [2024-07-26 11:37:25.882863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.381 qpair failed and we were unable to recover it. 00:29:30.381 [2024-07-26 11:37:25.883140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.381 [2024-07-26 11:37:25.883180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.381 qpair failed and we were unable to recover it. 
00:29:30.381 [2024-07-26 11:37:25.883415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.381 [2024-07-26 11:37:25.883451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.381 qpair failed and we were unable to recover it. 00:29:30.381 [2024-07-26 11:37:25.883659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.381 [2024-07-26 11:37:25.883713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.381 qpair failed and we were unable to recover it. 00:29:30.381 [2024-07-26 11:37:25.884022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.381 [2024-07-26 11:37:25.884085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.381 qpair failed and we were unable to recover it. 00:29:30.381 [2024-07-26 11:37:25.884402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.381 [2024-07-26 11:37:25.884503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.381 qpair failed and we were unable to recover it. 00:29:30.381 [2024-07-26 11:37:25.884760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.381 [2024-07-26 11:37:25.884831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.381 qpair failed and we were unable to recover it. 00:29:30.381 [2024-07-26 11:37:25.885073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.381 [2024-07-26 11:37:25.885107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.381 qpair failed and we were unable to recover it. 00:29:30.381 [2024-07-26 11:37:25.885320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.381 [2024-07-26 11:37:25.885383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.381 qpair failed and we were unable to recover it. 00:29:30.381 [2024-07-26 11:37:25.885675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.381 [2024-07-26 11:37:25.885703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.381 qpair failed and we were unable to recover it. 00:29:30.381 [2024-07-26 11:37:25.885929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.381 [2024-07-26 11:37:25.885957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.381 qpair failed and we were unable to recover it. 00:29:30.381 [2024-07-26 11:37:25.886160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.381 [2024-07-26 11:37:25.886196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.381 qpair failed and we were unable to recover it. 
00:29:30.381 [2024-07-26 11:37:25.886486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.381 [2024-07-26 11:37:25.886521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.381 qpair failed and we were unable to recover it. 00:29:30.381 [2024-07-26 11:37:25.886736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.381 [2024-07-26 11:37:25.886771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.381 qpair failed and we were unable to recover it. 00:29:30.381 [2024-07-26 11:37:25.887059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.381 [2024-07-26 11:37:25.887087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.381 qpair failed and we were unable to recover it. 00:29:30.381 [2024-07-26 11:37:25.887350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.381 [2024-07-26 11:37:25.887384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.381 qpair failed and we were unable to recover it. 00:29:30.381 [2024-07-26 11:37:25.887686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.381 [2024-07-26 11:37:25.887715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.381 qpair failed and we were unable to recover it. 00:29:30.381 [2024-07-26 11:37:25.887939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.381 [2024-07-26 11:37:25.887973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.381 qpair failed and we were unable to recover it. 00:29:30.381 [2024-07-26 11:37:25.888208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.381 [2024-07-26 11:37:25.888236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.381 qpair failed and we were unable to recover it. 00:29:30.381 [2024-07-26 11:37:25.888392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.381 [2024-07-26 11:37:25.888434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.381 qpair failed and we were unable to recover it. 00:29:30.381 [2024-07-26 11:37:25.888591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.381 [2024-07-26 11:37:25.888619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.381 qpair failed and we were unable to recover it. 00:29:30.381 [2024-07-26 11:37:25.888801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.381 [2024-07-26 11:37:25.888836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.381 qpair failed and we were unable to recover it. 
00:29:30.381 [2024-07-26 11:37:25.889079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.381 [2024-07-26 11:37:25.889107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.381 qpair failed and we were unable to recover it. 00:29:30.381 [2024-07-26 11:37:25.889290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.381 [2024-07-26 11:37:25.889325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.381 qpair failed and we were unable to recover it. 00:29:30.381 [2024-07-26 11:37:25.889536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.381 [2024-07-26 11:37:25.889602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.381 qpair failed and we were unable to recover it. 00:29:30.381 [2024-07-26 11:37:25.889919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.381 [2024-07-26 11:37:25.889974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.381 qpair failed and we were unable to recover it. 00:29:30.381 [2024-07-26 11:37:25.890289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.381 [2024-07-26 11:37:25.890317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.381 qpair failed and we were unable to recover it. 00:29:30.381 [2024-07-26 11:37:25.890651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.381 [2024-07-26 11:37:25.890680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.381 qpair failed and we were unable to recover it. 00:29:30.381 [2024-07-26 11:37:25.891001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.381 [2024-07-26 11:37:25.891065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.381 qpair failed and we were unable to recover it. 00:29:30.381 [2024-07-26 11:37:25.891398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.381 [2024-07-26 11:37:25.891469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.381 qpair failed and we were unable to recover it. 00:29:30.381 [2024-07-26 11:37:25.891726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.381 [2024-07-26 11:37:25.891755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.381 qpair failed and we were unable to recover it. 00:29:30.381 [2024-07-26 11:37:25.892080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.382 [2024-07-26 11:37:25.892115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.382 qpair failed and we were unable to recover it. 
00:29:30.382 [2024-07-26 11:37:25.892399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.382 [2024-07-26 11:37:25.892500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.382 qpair failed and we were unable to recover it. 00:29:30.382 [2024-07-26 11:37:25.892739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.382 [2024-07-26 11:37:25.892774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.382 qpair failed and we were unable to recover it. 00:29:30.382 [2024-07-26 11:37:25.893045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.382 [2024-07-26 11:37:25.893073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.382 qpair failed and we were unable to recover it. 00:29:30.382 [2024-07-26 11:37:25.893283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.382 [2024-07-26 11:37:25.893318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.382 qpair failed and we were unable to recover it. 00:29:30.382 [2024-07-26 11:37:25.893537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.382 [2024-07-26 11:37:25.893602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.382 qpair failed and we were unable to recover it. 00:29:30.382 [2024-07-26 11:37:25.893928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.382 [2024-07-26 11:37:25.894002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.382 qpair failed and we were unable to recover it. 00:29:30.382 [2024-07-26 11:37:25.894313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.382 [2024-07-26 11:37:25.894341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.382 qpair failed and we were unable to recover it. 00:29:30.382 [2024-07-26 11:37:25.894633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.382 [2024-07-26 11:37:25.894662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.382 qpair failed and we were unable to recover it. 00:29:30.382 [2024-07-26 11:37:25.894858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.382 [2024-07-26 11:37:25.894922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.382 qpair failed and we were unable to recover it. 00:29:30.382 [2024-07-26 11:37:25.895211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.382 [2024-07-26 11:37:25.895251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.382 qpair failed and we were unable to recover it. 
00:29:30.382 [2024-07-26 11:37:25.895551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.382 [2024-07-26 11:37:25.895580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.382 qpair failed and we were unable to recover it. 00:29:30.382 [2024-07-26 11:37:25.895807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.382 [2024-07-26 11:37:25.895842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.382 qpair failed and we were unable to recover it. 00:29:30.382 [2024-07-26 11:37:25.896176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.382 [2024-07-26 11:37:25.896240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.382 qpair failed and we were unable to recover it. 00:29:30.382 [2024-07-26 11:37:25.896535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.382 [2024-07-26 11:37:25.896571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.382 qpair failed and we were unable to recover it. 00:29:30.382 [2024-07-26 11:37:25.896923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.382 [2024-07-26 11:37:25.897002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.382 qpair failed and we were unable to recover it. 00:29:30.382 [2024-07-26 11:37:25.897281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.382 [2024-07-26 11:37:25.897315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.382 qpair failed and we were unable to recover it. 00:29:30.382 [2024-07-26 11:37:25.897562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.382 [2024-07-26 11:37:25.897591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.382 qpair failed and we were unable to recover it. 00:29:30.382 [2024-07-26 11:37:25.897786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.382 [2024-07-26 11:37:25.897820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.382 qpair failed and we were unable to recover it. 00:29:30.382 [2024-07-26 11:37:25.898018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.382 [2024-07-26 11:37:25.898047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.382 qpair failed and we were unable to recover it. 00:29:30.382 [2024-07-26 11:37:25.898235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.382 [2024-07-26 11:37:25.898270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.382 qpair failed and we were unable to recover it. 
00:29:30.382 [2024-07-26 11:37:25.898498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.382 [2024-07-26 11:37:25.898562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.382 qpair failed and we were unable to recover it. 00:29:30.382 [2024-07-26 11:37:25.898888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.382 [2024-07-26 11:37:25.898943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.382 qpair failed and we were unable to recover it. 00:29:30.382 [2024-07-26 11:37:25.899258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.382 [2024-07-26 11:37:25.899286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.382 qpair failed and we were unable to recover it. 00:29:30.382 [2024-07-26 11:37:25.899666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.382 [2024-07-26 11:37:25.899695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.382 qpair failed and we were unable to recover it. 00:29:30.382 [2024-07-26 11:37:25.900023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.382 [2024-07-26 11:37:25.900087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.382 qpair failed and we were unable to recover it. 00:29:30.382 [2024-07-26 11:37:25.900376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.382 [2024-07-26 11:37:25.900410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.382 qpair failed and we were unable to recover it. 00:29:30.382 [2024-07-26 11:37:25.900746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.382 [2024-07-26 11:37:25.900813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.382 qpair failed and we were unable to recover it. 00:29:30.382 [2024-07-26 11:37:25.901112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.382 [2024-07-26 11:37:25.901147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.382 qpair failed and we were unable to recover it. 00:29:30.382 [2024-07-26 11:37:25.901485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.382 [2024-07-26 11:37:25.901550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.382 qpair failed and we were unable to recover it. 00:29:30.382 [2024-07-26 11:37:25.901894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.382 [2024-07-26 11:37:25.901967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.382 qpair failed and we were unable to recover it. 
00:29:30.389 [2024-07-26 11:37:25.957304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.389 [2024-07-26 11:37:25.957368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.389 qpair failed and we were unable to recover it. 00:29:30.389 [2024-07-26 11:37:25.957612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.389 [2024-07-26 11:37:25.957641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.389 qpair failed and we were unable to recover it. 00:29:30.389 [2024-07-26 11:37:25.957789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.389 [2024-07-26 11:37:25.957818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.390 qpair failed and we were unable to recover it. 00:29:30.390 [2024-07-26 11:37:25.958015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.390 [2024-07-26 11:37:25.958050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.390 qpair failed and we were unable to recover it. 00:29:30.390 [2024-07-26 11:37:25.958264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.390 [2024-07-26 11:37:25.958328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.390 qpair failed and we were unable to recover it. 00:29:30.390 [2024-07-26 11:37:25.958568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.390 [2024-07-26 11:37:25.958598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.390 qpair failed and we were unable to recover it. 00:29:30.390 [2024-07-26 11:37:25.958765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.390 [2024-07-26 11:37:25.958794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.390 qpair failed and we were unable to recover it. 00:29:30.390 [2024-07-26 11:37:25.958975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.390 [2024-07-26 11:37:25.959010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.390 qpair failed and we were unable to recover it. 00:29:30.390 [2024-07-26 11:37:25.959200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.390 [2024-07-26 11:37:25.959263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.390 qpair failed and we were unable to recover it. 00:29:30.390 [2024-07-26 11:37:25.959538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.390 [2024-07-26 11:37:25.959568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.390 qpair failed and we were unable to recover it. 
00:29:30.390 [2024-07-26 11:37:25.959751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.390 [2024-07-26 11:37:25.959780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.390 qpair failed and we were unable to recover it. 00:29:30.390 [2024-07-26 11:37:25.960021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.390 [2024-07-26 11:37:25.960058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.390 qpair failed and we were unable to recover it. 00:29:30.390 [2024-07-26 11:37:25.960332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.390 [2024-07-26 11:37:25.960396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.390 qpair failed and we were unable to recover it. 00:29:30.390 [2024-07-26 11:37:25.960622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.390 [2024-07-26 11:37:25.960650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.390 qpair failed and we were unable to recover it. 00:29:30.390 [2024-07-26 11:37:25.960834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.390 [2024-07-26 11:37:25.960862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.390 qpair failed and we were unable to recover it. 00:29:30.390 [2024-07-26 11:37:25.961065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.390 [2024-07-26 11:37:25.961100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.390 qpair failed and we were unable to recover it. 00:29:30.390 [2024-07-26 11:37:25.961394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.390 [2024-07-26 11:37:25.961476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.390 qpair failed and we were unable to recover it. 00:29:30.390 [2024-07-26 11:37:25.961649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.390 [2024-07-26 11:37:25.961685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.390 qpair failed and we were unable to recover it. 00:29:30.390 [2024-07-26 11:37:25.961893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.390 [2024-07-26 11:37:25.961922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.390 qpair failed and we were unable to recover it. 00:29:30.390 [2024-07-26 11:37:25.962101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.390 [2024-07-26 11:37:25.962136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.390 qpair failed and we were unable to recover it. 
00:29:30.390 [2024-07-26 11:37:25.962339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.390 [2024-07-26 11:37:25.962403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.390 qpair failed and we were unable to recover it. 00:29:30.390 [2024-07-26 11:37:25.962637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.391 [2024-07-26 11:37:25.962666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.391 qpair failed and we were unable to recover it. 00:29:30.391 [2024-07-26 11:37:25.962831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.391 [2024-07-26 11:37:25.962859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.391 qpair failed and we were unable to recover it. 00:29:30.391 [2024-07-26 11:37:25.963001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.391 [2024-07-26 11:37:25.963035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.391 qpair failed and we were unable to recover it. 00:29:30.391 [2024-07-26 11:37:25.963319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.391 [2024-07-26 11:37:25.963382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.391 qpair failed and we were unable to recover it. 00:29:30.391 [2024-07-26 11:37:25.963618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.391 [2024-07-26 11:37:25.963647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.391 qpair failed and we were unable to recover it. 00:29:30.391 [2024-07-26 11:37:25.963857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.391 [2024-07-26 11:37:25.963886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.391 qpair failed and we were unable to recover it. 00:29:30.391 [2024-07-26 11:37:25.964250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.391 [2024-07-26 11:37:25.964314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.391 qpair failed and we were unable to recover it. 00:29:30.391 [2024-07-26 11:37:25.964537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.391 [2024-07-26 11:37:25.964565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.391 qpair failed and we were unable to recover it. 00:29:30.391 [2024-07-26 11:37:25.964762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.391 [2024-07-26 11:37:25.964802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.391 qpair failed and we were unable to recover it. 
00:29:30.391 [2024-07-26 11:37:25.965045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.391 [2024-07-26 11:37:25.965074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.391 qpair failed and we were unable to recover it. 00:29:30.391 [2024-07-26 11:37:25.965342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.391 [2024-07-26 11:37:25.965379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.391 qpair failed and we were unable to recover it. 00:29:30.391 [2024-07-26 11:37:25.965612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.391 [2024-07-26 11:37:25.965641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.391 qpair failed and we were unable to recover it. 00:29:30.391 [2024-07-26 11:37:25.965855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.391 [2024-07-26 11:37:25.965891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.391 qpair failed and we were unable to recover it. 00:29:30.391 [2024-07-26 11:37:25.966192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.391 [2024-07-26 11:37:25.966260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.391 qpair failed and we were unable to recover it. 00:29:30.391 [2024-07-26 11:37:25.966523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.391 [2024-07-26 11:37:25.966552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.391 qpair failed and we were unable to recover it. 00:29:30.391 [2024-07-26 11:37:25.966756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.391 [2024-07-26 11:37:25.966821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.391 qpair failed and we were unable to recover it. 00:29:30.391 [2024-07-26 11:37:25.967100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.391 [2024-07-26 11:37:25.967135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.391 qpair failed and we were unable to recover it. 00:29:30.391 [2024-07-26 11:37:25.967304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.391 [2024-07-26 11:37:25.967332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.391 qpair failed and we were unable to recover it. 00:29:30.391 [2024-07-26 11:37:25.967522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.391 [2024-07-26 11:37:25.967552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.391 qpair failed and we were unable to recover it. 
00:29:30.391 [2024-07-26 11:37:25.967714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.391 [2024-07-26 11:37:25.967778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.391 qpair failed and we were unable to recover it. 00:29:30.391 [2024-07-26 11:37:25.968187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.391 [2024-07-26 11:37:25.968250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.391 qpair failed and we were unable to recover it. 00:29:30.391 [2024-07-26 11:37:25.968489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.391 [2024-07-26 11:37:25.968517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.391 qpair failed and we were unable to recover it. 00:29:30.391 [2024-07-26 11:37:25.968682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.391 [2024-07-26 11:37:25.968725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.391 qpair failed and we were unable to recover it. 00:29:30.391 [2024-07-26 11:37:25.968911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.391 [2024-07-26 11:37:25.968975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.391 qpair failed and we were unable to recover it. 00:29:30.391 [2024-07-26 11:37:25.969275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.391 [2024-07-26 11:37:25.969309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.391 qpair failed and we were unable to recover it. 00:29:30.391 [2024-07-26 11:37:25.969501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.391 [2024-07-26 11:37:25.969532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.391 qpair failed and we were unable to recover it. 00:29:30.391 [2024-07-26 11:37:25.969721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.392 [2024-07-26 11:37:25.969766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.392 qpair failed and we were unable to recover it. 00:29:30.392 [2024-07-26 11:37:25.970016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.392 [2024-07-26 11:37:25.970079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.392 qpair failed and we were unable to recover it. 00:29:30.392 [2024-07-26 11:37:25.970346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.392 [2024-07-26 11:37:25.970381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.392 qpair failed and we were unable to recover it. 
00:29:30.392 [2024-07-26 11:37:25.970561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.392 [2024-07-26 11:37:25.970590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.392 qpair failed and we were unable to recover it. 00:29:30.392 [2024-07-26 11:37:25.970870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.392 [2024-07-26 11:37:25.970933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.392 qpair failed and we were unable to recover it. 00:29:30.392 [2024-07-26 11:37:25.971243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.392 [2024-07-26 11:37:25.971307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.392 qpair failed and we were unable to recover it. 00:29:30.392 [2024-07-26 11:37:25.971596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.392 [2024-07-26 11:37:25.971624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.392 qpair failed and we were unable to recover it. 00:29:30.392 [2024-07-26 11:37:25.971906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.392 [2024-07-26 11:37:25.971971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.392 qpair failed and we were unable to recover it. 00:29:30.392 [2024-07-26 11:37:25.972370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.392 [2024-07-26 11:37:25.972476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.392 qpair failed and we were unable to recover it. 00:29:30.392 [2024-07-26 11:37:25.972672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.392 [2024-07-26 11:37:25.972705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.392 qpair failed and we were unable to recover it. 00:29:30.392 [2024-07-26 11:37:25.972941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.392 [2024-07-26 11:37:25.972969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.392 qpair failed and we were unable to recover it. 00:29:30.392 [2024-07-26 11:37:25.973186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.392 [2024-07-26 11:37:25.973214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.392 qpair failed and we were unable to recover it. 00:29:30.392 [2024-07-26 11:37:25.973366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.392 [2024-07-26 11:37:25.973395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.392 qpair failed and we were unable to recover it. 
00:29:30.392 [2024-07-26 11:37:25.973563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.392 [2024-07-26 11:37:25.973591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.392 qpair failed and we were unable to recover it. 00:29:30.392 [2024-07-26 11:37:25.973772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.392 [2024-07-26 11:37:25.973807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.392 qpair failed and we were unable to recover it. 00:29:30.392 [2024-07-26 11:37:25.973998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.392 [2024-07-26 11:37:25.974026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.392 qpair failed and we were unable to recover it. 00:29:30.392 [2024-07-26 11:37:25.974227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.392 [2024-07-26 11:37:25.974262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.392 qpair failed and we were unable to recover it. 00:29:30.392 [2024-07-26 11:37:25.974499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.392 [2024-07-26 11:37:25.974535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.392 qpair failed and we were unable to recover it. 00:29:30.392 [2024-07-26 11:37:25.974690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.392 [2024-07-26 11:37:25.974718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.392 qpair failed and we were unable to recover it. 00:29:30.392 [2024-07-26 11:37:25.975027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.392 [2024-07-26 11:37:25.975076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.392 qpair failed and we were unable to recover it. 00:29:30.392 [2024-07-26 11:37:25.975337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.392 [2024-07-26 11:37:25.975370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.392 qpair failed and we were unable to recover it. 00:29:30.392 [2024-07-26 11:37:25.975584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.392 [2024-07-26 11:37:25.975613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.392 qpair failed and we were unable to recover it. 00:29:30.392 [2024-07-26 11:37:25.975853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.392 [2024-07-26 11:37:25.975893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.392 qpair failed and we were unable to recover it. 
00:29:30.392 [2024-07-26 11:37:25.976287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.392 [2024-07-26 11:37:25.976350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.392 qpair failed and we were unable to recover it. 00:29:30.392 [2024-07-26 11:37:25.976607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.392 [2024-07-26 11:37:25.976636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.392 qpair failed and we were unable to recover it. 00:29:30.392 [2024-07-26 11:37:25.976789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.392 [2024-07-26 11:37:25.976853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.392 qpair failed and we were unable to recover it. 00:29:30.393 [2024-07-26 11:37:25.977160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.393 [2024-07-26 11:37:25.977198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.393 qpair failed and we were unable to recover it. 00:29:30.393 [2024-07-26 11:37:25.977497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.393 [2024-07-26 11:37:25.977526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.393 qpair failed and we were unable to recover it. 00:29:30.393 [2024-07-26 11:37:25.977680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.393 [2024-07-26 11:37:25.977708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.393 qpair failed and we were unable to recover it. 00:29:30.393 [2024-07-26 11:37:25.977941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.393 [2024-07-26 11:37:25.978004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.393 qpair failed and we were unable to recover it. 00:29:30.393 [2024-07-26 11:37:25.978333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.393 [2024-07-26 11:37:25.978385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.393 qpair failed and we were unable to recover it. 00:29:30.393 [2024-07-26 11:37:25.978647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.393 [2024-07-26 11:37:25.978676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.393 qpair failed and we were unable to recover it. 00:29:30.393 [2024-07-26 11:37:25.978858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.393 [2024-07-26 11:37:25.978892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.393 qpair failed and we were unable to recover it. 
00:29:30.393 [2024-07-26 11:37:25.979122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.393 [2024-07-26 11:37:25.979185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.393 qpair failed and we were unable to recover it. 00:29:30.393 [2024-07-26 11:37:25.979500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.393 [2024-07-26 11:37:25.979529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.393 qpair failed and we were unable to recover it. 00:29:30.393 [2024-07-26 11:37:25.979713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.393 [2024-07-26 11:37:25.979742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.393 qpair failed and we were unable to recover it. 00:29:30.393 [2024-07-26 11:37:25.979979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.393 [2024-07-26 11:37:25.980014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.393 qpair failed and we were unable to recover it. 00:29:30.393 [2024-07-26 11:37:25.980208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.393 [2024-07-26 11:37:25.980272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.393 qpair failed and we were unable to recover it. 00:29:30.393 [2024-07-26 11:37:25.980524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.393 [2024-07-26 11:37:25.980553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.393 qpair failed and we were unable to recover it. 00:29:30.393 [2024-07-26 11:37:25.980848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.393 [2024-07-26 11:37:25.980911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.393 qpair failed and we were unable to recover it. 00:29:30.393 [2024-07-26 11:37:25.981193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.393 [2024-07-26 11:37:25.981227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.393 qpair failed and we were unable to recover it. 00:29:30.393 [2024-07-26 11:37:25.981457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.393 [2024-07-26 11:37:25.981520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.393 qpair failed and we were unable to recover it. 00:29:30.393 [2024-07-26 11:37:25.981672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.393 [2024-07-26 11:37:25.981701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.393 qpair failed and we were unable to recover it. 
00:29:30.393 [2024-07-26 11:37:25.981938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.393 [2024-07-26 11:37:25.981966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.393 qpair failed and we were unable to recover it. 00:29:30.393 [2024-07-26 11:37:25.982123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.393 [2024-07-26 11:37:25.982158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.393 qpair failed and we were unable to recover it. 00:29:30.393 [2024-07-26 11:37:25.982366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.393 [2024-07-26 11:37:25.982479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.393 qpair failed and we were unable to recover it. 00:29:30.393 [2024-07-26 11:37:25.982669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.393 [2024-07-26 11:37:25.982698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.393 qpair failed and we were unable to recover it. 00:29:30.393 [2024-07-26 11:37:25.982894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.393 [2024-07-26 11:37:25.982923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.393 qpair failed and we were unable to recover it. 00:29:30.393 [2024-07-26 11:37:25.983134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.393 [2024-07-26 11:37:25.983168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.393 qpair failed and we were unable to recover it. 00:29:30.393 [2024-07-26 11:37:25.983367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.393 [2024-07-26 11:37:25.983450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.393 qpair failed and we were unable to recover it. 00:29:30.393 [2024-07-26 11:37:25.983635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.393 [2024-07-26 11:37:25.983671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.393 qpair failed and we were unable to recover it. 00:29:30.393 [2024-07-26 11:37:25.983901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.393 [2024-07-26 11:37:25.983929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.393 qpair failed and we were unable to recover it. 00:29:30.394 [2024-07-26 11:37:25.984193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.394 [2024-07-26 11:37:25.984227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.394 qpair failed and we were unable to recover it. 
00:29:30.394 [2024-07-26 11:37:25.984447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.394 [2024-07-26 11:37:25.984496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.394 qpair failed and we were unable to recover it. 00:29:30.394 [2024-07-26 11:37:25.984629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.394 [2024-07-26 11:37:25.984657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.394 qpair failed and we were unable to recover it. 00:29:30.394 [2024-07-26 11:37:25.984839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.394 [2024-07-26 11:37:25.984868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.394 qpair failed and we were unable to recover it. 00:29:30.394 [2024-07-26 11:37:25.985075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.394 [2024-07-26 11:37:25.985109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.394 qpair failed and we were unable to recover it. 00:29:30.394 [2024-07-26 11:37:25.985315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.394 [2024-07-26 11:37:25.985379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.394 qpair failed and we were unable to recover it. 00:29:30.394 [2024-07-26 11:37:25.985622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.394 [2024-07-26 11:37:25.985651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.394 qpair failed and we were unable to recover it. 00:29:30.394 [2024-07-26 11:37:25.985823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.394 [2024-07-26 11:37:25.985851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.394 qpair failed and we were unable to recover it. 00:29:30.394 [2024-07-26 11:37:25.986040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.394 [2024-07-26 11:37:25.986075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.394 qpair failed and we were unable to recover it. 00:29:30.394 [2024-07-26 11:37:25.986272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.394 [2024-07-26 11:37:25.986337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.394 qpair failed and we were unable to recover it. 00:29:30.394 [2024-07-26 11:37:25.986587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.394 [2024-07-26 11:37:25.986621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.394 qpair failed and we were unable to recover it. 
00:29:30.394 [2024-07-26 11:37:25.986784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.394 [2024-07-26 11:37:25.986812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.394 qpair failed and we were unable to recover it. 00:29:30.394 [2024-07-26 11:37:25.987098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.394 [2024-07-26 11:37:25.987132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.394 qpair failed and we were unable to recover it. 00:29:30.394 [2024-07-26 11:37:25.987362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.394 [2024-07-26 11:37:25.987425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.394 qpair failed and we were unable to recover it. 00:29:30.394 [2024-07-26 11:37:25.987661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.394 [2024-07-26 11:37:25.987699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.394 qpair failed and we were unable to recover it. 00:29:30.394 [2024-07-26 11:37:25.987992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.394 [2024-07-26 11:37:25.988057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.394 qpair failed and we were unable to recover it. 00:29:30.394 [2024-07-26 11:37:25.988358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.394 [2024-07-26 11:37:25.988394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.394 qpair failed and we were unable to recover it. 00:29:30.394 [2024-07-26 11:37:25.988609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.394 [2024-07-26 11:37:25.988638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.394 qpair failed and we were unable to recover it. 00:29:30.394 [2024-07-26 11:37:25.988860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.394 [2024-07-26 11:37:25.988896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.394 qpair failed and we were unable to recover it. 00:29:30.394 [2024-07-26 11:37:25.989131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.394 [2024-07-26 11:37:25.989159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.394 qpair failed and we were unable to recover it. 00:29:30.394 [2024-07-26 11:37:25.989373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.394 [2024-07-26 11:37:25.989454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.394 qpair failed and we were unable to recover it. 
00:29:30.394 [2024-07-26 11:37:25.989647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.394 [2024-07-26 11:37:25.989675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.394 qpair failed and we were unable to recover it. 00:29:30.394 [2024-07-26 11:37:25.989864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.394 [2024-07-26 11:37:25.989898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.394 qpair failed and we were unable to recover it. 00:29:30.394 [2024-07-26 11:37:25.990062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.394 [2024-07-26 11:37:25.990090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.394 qpair failed and we were unable to recover it. 00:29:30.394 [2024-07-26 11:37:25.990309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.394 [2024-07-26 11:37:25.990345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.394 qpair failed and we were unable to recover it. 00:29:30.394 [2024-07-26 11:37:25.990565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.394 [2024-07-26 11:37:25.990594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.394 qpair failed and we were unable to recover it. 00:29:30.394 [2024-07-26 11:37:25.990726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.394 [2024-07-26 11:37:25.990772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.394 qpair failed and we were unable to recover it. 00:29:30.394 [2024-07-26 11:37:25.990950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.394 [2024-07-26 11:37:25.990987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.394 qpair failed and we were unable to recover it. 00:29:30.394 [2024-07-26 11:37:25.991132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.394 [2024-07-26 11:37:25.991176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.394 qpair failed and we were unable to recover it. 00:29:30.394 [2024-07-26 11:37:25.991460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.394 [2024-07-26 11:37:25.991522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.394 qpair failed and we were unable to recover it. 00:29:30.394 [2024-07-26 11:37:25.991665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.394 [2024-07-26 11:37:25.991694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.395 qpair failed and we were unable to recover it. 
00:29:30.395 [2024-07-26 11:37:25.991907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.395 [2024-07-26 11:37:25.991936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.395 qpair failed and we were unable to recover it. 00:29:30.395 [2024-07-26 11:37:25.992101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.395 [2024-07-26 11:37:25.992140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.395 qpair failed and we were unable to recover it. 00:29:30.395 [2024-07-26 11:37:25.992398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.395 [2024-07-26 11:37:25.992483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.395 qpair failed and we were unable to recover it. 00:29:30.395 [2024-07-26 11:37:25.992664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.395 [2024-07-26 11:37:25.992692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.395 qpair failed and we were unable to recover it. 00:29:30.395 [2024-07-26 11:37:25.992861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.395 [2024-07-26 11:37:25.992889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.395 qpair failed and we were unable to recover it. 00:29:30.395 [2024-07-26 11:37:25.993101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.395 [2024-07-26 11:37:25.993136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.395 qpair failed and we were unable to recover it. 00:29:30.395 [2024-07-26 11:37:25.993369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.395 [2024-07-26 11:37:25.993450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.395 qpair failed and we were unable to recover it. 00:29:30.395 [2024-07-26 11:37:25.993632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.395 [2024-07-26 11:37:25.993660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.395 qpair failed and we were unable to recover it. 00:29:30.395 [2024-07-26 11:37:25.993883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.395 [2024-07-26 11:37:25.993912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.395 qpair failed and we were unable to recover it. 00:29:30.395 [2024-07-26 11:37:25.994212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.395 [2024-07-26 11:37:25.994247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.395 qpair failed and we were unable to recover it. 
00:29:30.395 [2024-07-26 11:37:25.994522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.395 [2024-07-26 11:37:25.994551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:30.395 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats for every reconnect attempt from 11:37:25.994522 through 11:37:26.046779: posix_sock_create reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reports the sock connection error for tqpair=0x7f0cb4000b90 (addr=10.0.0.2, port=4420), and the qpair cannot be recovered ...]
00:29:30.691 [2024-07-26 11:37:26.046730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.691 [2024-07-26 11:37:26.046779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:30.691 qpair failed and we were unable to recover it.
00:29:30.691 [2024-07-26 11:37:26.047000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.691 [2024-07-26 11:37:26.047029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.691 qpair failed and we were unable to recover it. 00:29:30.691 [2024-07-26 11:37:26.047291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.691 [2024-07-26 11:37:26.047348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.691 qpair failed and we were unable to recover it. 00:29:30.691 [2024-07-26 11:37:26.047596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.691 [2024-07-26 11:37:26.047625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.691 qpair failed and we were unable to recover it. 00:29:30.691 [2024-07-26 11:37:26.047807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.691 [2024-07-26 11:37:26.047843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.691 qpair failed and we were unable to recover it. 00:29:30.691 [2024-07-26 11:37:26.048005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.691 [2024-07-26 11:37:26.048034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.691 qpair failed and we were unable to recover it. 00:29:30.691 [2024-07-26 11:37:26.048280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.691 [2024-07-26 11:37:26.048315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.691 qpair failed and we were unable to recover it. 00:29:30.691 [2024-07-26 11:37:26.048598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.691 [2024-07-26 11:37:26.048628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.691 qpair failed and we were unable to recover it. 00:29:30.691 [2024-07-26 11:37:26.048801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.691 [2024-07-26 11:37:26.048836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.691 qpair failed and we were unable to recover it. 00:29:30.691 [2024-07-26 11:37:26.049008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.691 [2024-07-26 11:37:26.049036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.691 qpair failed and we were unable to recover it. 00:29:30.691 [2024-07-26 11:37:26.049214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.691 [2024-07-26 11:37:26.049249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.691 qpair failed and we were unable to recover it. 
00:29:30.691 [2024-07-26 11:37:26.049460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.691 [2024-07-26 11:37:26.049521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.691 qpair failed and we were unable to recover it. 00:29:30.691 [2024-07-26 11:37:26.049728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.691 [2024-07-26 11:37:26.049764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.691 qpair failed and we were unable to recover it. 00:29:30.691 [2024-07-26 11:37:26.049957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.691 [2024-07-26 11:37:26.049985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.691 qpair failed and we were unable to recover it. 00:29:30.691 [2024-07-26 11:37:26.050167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.692 [2024-07-26 11:37:26.050202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.692 qpair failed and we were unable to recover it. 00:29:30.692 [2024-07-26 11:37:26.050422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.692 [2024-07-26 11:37:26.050516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.692 qpair failed and we were unable to recover it. 00:29:30.692 [2024-07-26 11:37:26.050665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.692 [2024-07-26 11:37:26.050693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.692 qpair failed and we were unable to recover it. 00:29:30.692 [2024-07-26 11:37:26.050837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.692 [2024-07-26 11:37:26.050865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.692 qpair failed and we were unable to recover it. 00:29:30.692 [2024-07-26 11:37:26.051083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.692 [2024-07-26 11:37:26.051118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.692 qpair failed and we were unable to recover it. 00:29:30.692 [2024-07-26 11:37:26.051425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.692 [2024-07-26 11:37:26.051504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.692 qpair failed and we were unable to recover it. 00:29:30.692 [2024-07-26 11:37:26.051642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.692 [2024-07-26 11:37:26.051671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.692 qpair failed and we were unable to recover it. 
00:29:30.692 [2024-07-26 11:37:26.051875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.692 [2024-07-26 11:37:26.051903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.692 qpair failed and we were unable to recover it. 00:29:30.692 [2024-07-26 11:37:26.052124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.692 [2024-07-26 11:37:26.052159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.692 qpair failed and we were unable to recover it. 00:29:30.692 [2024-07-26 11:37:26.052411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.692 [2024-07-26 11:37:26.052498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.692 qpair failed and we were unable to recover it. 00:29:30.692 [2024-07-26 11:37:26.052663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.692 [2024-07-26 11:37:26.052694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.692 qpair failed and we were unable to recover it. 00:29:30.692 [2024-07-26 11:37:26.052871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.692 [2024-07-26 11:37:26.052900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.692 qpair failed and we were unable to recover it. 00:29:30.692 [2024-07-26 11:37:26.053130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.692 [2024-07-26 11:37:26.053165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.692 qpair failed and we were unable to recover it. 00:29:30.692 [2024-07-26 11:37:26.053420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.692 [2024-07-26 11:37:26.053520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.692 qpair failed and we were unable to recover it. 00:29:30.692 [2024-07-26 11:37:26.053661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.692 [2024-07-26 11:37:26.053690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.692 qpair failed and we were unable to recover it. 00:29:30.692 [2024-07-26 11:37:26.053872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.692 [2024-07-26 11:37:26.053901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.692 qpair failed and we were unable to recover it. 00:29:30.692 [2024-07-26 11:37:26.054133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.692 [2024-07-26 11:37:26.054168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.692 qpair failed and we were unable to recover it. 
00:29:30.692 [2024-07-26 11:37:26.054459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.692 [2024-07-26 11:37:26.054527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.692 qpair failed and we were unable to recover it. 00:29:30.692 [2024-07-26 11:37:26.054669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.692 [2024-07-26 11:37:26.054697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.692 qpair failed and we were unable to recover it. 00:29:30.692 [2024-07-26 11:37:26.054908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.692 [2024-07-26 11:37:26.054937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.692 qpair failed and we were unable to recover it. 00:29:30.692 [2024-07-26 11:37:26.055158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.692 [2024-07-26 11:37:26.055194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.692 qpair failed and we were unable to recover it. 00:29:30.692 [2024-07-26 11:37:26.055399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.692 [2024-07-26 11:37:26.055504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.692 qpair failed and we were unable to recover it. 00:29:30.692 [2024-07-26 11:37:26.055640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.692 [2024-07-26 11:37:26.055668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.692 qpair failed and we were unable to recover it. 00:29:30.692 [2024-07-26 11:37:26.055861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.692 [2024-07-26 11:37:26.055890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.692 qpair failed and we were unable to recover it. 00:29:30.692 [2024-07-26 11:37:26.056108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.692 [2024-07-26 11:37:26.056151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.692 qpair failed and we were unable to recover it. 00:29:30.692 [2024-07-26 11:37:26.056417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.692 [2024-07-26 11:37:26.056517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.692 qpair failed and we were unable to recover it. 00:29:30.692 [2024-07-26 11:37:26.056702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.692 [2024-07-26 11:37:26.056752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.692 qpair failed and we were unable to recover it. 
00:29:30.692 [2024-07-26 11:37:26.057052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.692 [2024-07-26 11:37:26.057108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.692 qpair failed and we were unable to recover it. 00:29:30.692 [2024-07-26 11:37:26.057464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.692 [2024-07-26 11:37:26.057522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.692 qpair failed and we were unable to recover it. 00:29:30.692 [2024-07-26 11:37:26.057700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.692 [2024-07-26 11:37:26.057749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.692 qpair failed and we were unable to recover it. 00:29:30.692 [2024-07-26 11:37:26.058030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.692 [2024-07-26 11:37:26.058065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.692 qpair failed and we were unable to recover it. 00:29:30.692 [2024-07-26 11:37:26.058231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.692 [2024-07-26 11:37:26.058260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.692 qpair failed and we were unable to recover it. 00:29:30.692 [2024-07-26 11:37:26.058502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.692 [2024-07-26 11:37:26.058532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.692 qpair failed and we were unable to recover it. 00:29:30.692 [2024-07-26 11:37:26.058671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.692 [2024-07-26 11:37:26.058719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.692 qpair failed and we were unable to recover it. 00:29:30.692 [2024-07-26 11:37:26.059081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.692 [2024-07-26 11:37:26.059159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.692 qpair failed and we were unable to recover it. 00:29:30.692 [2024-07-26 11:37:26.059435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.692 [2024-07-26 11:37:26.059487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.692 qpair failed and we were unable to recover it. 00:29:30.692 [2024-07-26 11:37:26.059648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.692 [2024-07-26 11:37:26.059695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.692 qpair failed and we were unable to recover it. 
00:29:30.692 [2024-07-26 11:37:26.059909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.692 [2024-07-26 11:37:26.059973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.692 qpair failed and we were unable to recover it. 00:29:30.693 [2024-07-26 11:37:26.060224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.693 [2024-07-26 11:37:26.060259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.693 qpair failed and we were unable to recover it. 00:29:30.693 [2024-07-26 11:37:26.060479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.693 [2024-07-26 11:37:26.060508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.693 qpair failed and we were unable to recover it. 00:29:30.693 [2024-07-26 11:37:26.060669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.693 [2024-07-26 11:37:26.060711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.693 qpair failed and we were unable to recover it. 00:29:30.693 [2024-07-26 11:37:26.060899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.693 [2024-07-26 11:37:26.060932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.693 qpair failed and we were unable to recover it. 00:29:30.693 [2024-07-26 11:37:26.061100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.693 [2024-07-26 11:37:26.061135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.693 qpair failed and we were unable to recover it. 00:29:30.693 [2024-07-26 11:37:26.061370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.693 [2024-07-26 11:37:26.061416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.693 qpair failed and we were unable to recover it. 00:29:30.693 [2024-07-26 11:37:26.061594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.693 [2024-07-26 11:37:26.061623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.693 qpair failed and we were unable to recover it. 00:29:30.693 [2024-07-26 11:37:26.061836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.693 [2024-07-26 11:37:26.061911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.693 qpair failed and we were unable to recover it. 00:29:30.693 [2024-07-26 11:37:26.062172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.693 [2024-07-26 11:37:26.062206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.693 qpair failed and we were unable to recover it. 
00:29:30.693 [2024-07-26 11:37:26.062379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.693 [2024-07-26 11:37:26.062407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.693 qpair failed and we were unable to recover it. 00:29:30.693 [2024-07-26 11:37:26.062544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.693 [2024-07-26 11:37:26.062572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.693 qpair failed and we were unable to recover it. 00:29:30.693 [2024-07-26 11:37:26.062713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.693 [2024-07-26 11:37:26.062766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.693 qpair failed and we were unable to recover it. 00:29:30.693 [2024-07-26 11:37:26.063141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.693 [2024-07-26 11:37:26.063215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.693 qpair failed and we were unable to recover it. 00:29:30.693 [2024-07-26 11:37:26.063532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.693 [2024-07-26 11:37:26.063562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.693 qpair failed and we were unable to recover it. 00:29:30.693 [2024-07-26 11:37:26.063709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.693 [2024-07-26 11:37:26.063744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.693 qpair failed and we were unable to recover it. 00:29:30.693 [2024-07-26 11:37:26.063953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.693 [2024-07-26 11:37:26.064028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.693 qpair failed and we were unable to recover it. 00:29:30.693 [2024-07-26 11:37:26.064421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.693 [2024-07-26 11:37:26.064513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.693 qpair failed and we were unable to recover it. 00:29:30.693 [2024-07-26 11:37:26.064667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.693 [2024-07-26 11:37:26.064695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.693 qpair failed and we were unable to recover it. 00:29:30.693 [2024-07-26 11:37:26.064903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.693 [2024-07-26 11:37:26.064938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.693 qpair failed and we were unable to recover it. 
00:29:30.693 [2024-07-26 11:37:26.065160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.693 [2024-07-26 11:37:26.065233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.693 qpair failed and we were unable to recover it. 00:29:30.693 [2024-07-26 11:37:26.065526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.693 [2024-07-26 11:37:26.065555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.693 qpair failed and we were unable to recover it. 00:29:30.693 [2024-07-26 11:37:26.065692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.693 [2024-07-26 11:37:26.065721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.693 qpair failed and we were unable to recover it. 00:29:30.693 [2024-07-26 11:37:26.065876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.693 [2024-07-26 11:37:26.065911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.693 qpair failed and we were unable to recover it. 00:29:30.693 [2024-07-26 11:37:26.066218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.693 [2024-07-26 11:37:26.066282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.693 qpair failed and we were unable to recover it. 00:29:30.693 [2024-07-26 11:37:26.066566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.693 [2024-07-26 11:37:26.066595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.693 qpair failed and we were unable to recover it. 00:29:30.693 [2024-07-26 11:37:26.066757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.693 [2024-07-26 11:37:26.066786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.693 qpair failed and we were unable to recover it. 00:29:30.693 [2024-07-26 11:37:26.066969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.693 [2024-07-26 11:37:26.067004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.693 qpair failed and we were unable to recover it. 00:29:30.693 [2024-07-26 11:37:26.067230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.693 [2024-07-26 11:37:26.067293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.693 qpair failed and we were unable to recover it. 00:29:30.693 [2024-07-26 11:37:26.067554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.693 [2024-07-26 11:37:26.067583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.693 qpair failed and we were unable to recover it. 
00:29:30.693 [2024-07-26 11:37:26.067757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.693 [2024-07-26 11:37:26.067785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.693 qpair failed and we were unable to recover it. 00:29:30.693 [2024-07-26 11:37:26.067956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.693 [2024-07-26 11:37:26.067991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.693 qpair failed and we were unable to recover it. 00:29:30.693 [2024-07-26 11:37:26.068230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.693 [2024-07-26 11:37:26.068295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.693 qpair failed and we were unable to recover it. 00:29:30.693 [2024-07-26 11:37:26.068569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.693 [2024-07-26 11:37:26.068599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.693 qpair failed and we were unable to recover it. 00:29:30.693 [2024-07-26 11:37:26.068802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.693 [2024-07-26 11:37:26.068830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.693 qpair failed and we were unable to recover it. 00:29:30.693 [2024-07-26 11:37:26.069068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.693 [2024-07-26 11:37:26.069105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.693 qpair failed and we were unable to recover it. 00:29:30.693 [2024-07-26 11:37:26.069362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.693 [2024-07-26 11:37:26.069441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.693 qpair failed and we were unable to recover it. 00:29:30.693 [2024-07-26 11:37:26.069633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.693 [2024-07-26 11:37:26.069661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.693 qpair failed and we were unable to recover it. 00:29:30.693 [2024-07-26 11:37:26.069884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.694 [2024-07-26 11:37:26.069913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.694 qpair failed and we were unable to recover it. 00:29:30.694 [2024-07-26 11:37:26.070210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.694 [2024-07-26 11:37:26.070255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.694 qpair failed and we were unable to recover it. 
00:29:30.694 [2024-07-26 11:37:26.070513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.694 [2024-07-26 11:37:26.070542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.694 qpair failed and we were unable to recover it. 00:29:30.694 [2024-07-26 11:37:26.070706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.694 [2024-07-26 11:37:26.070751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.694 qpair failed and we were unable to recover it. 00:29:30.694 [2024-07-26 11:37:26.071021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.694 [2024-07-26 11:37:26.071088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.694 qpair failed and we were unable to recover it. 00:29:30.694 [2024-07-26 11:37:26.071373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.694 [2024-07-26 11:37:26.071408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.694 qpair failed and we were unable to recover it. 00:29:30.694 [2024-07-26 11:37:26.071638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.694 [2024-07-26 11:37:26.071667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.694 qpair failed and we were unable to recover it. 00:29:30.694 [2024-07-26 11:37:26.071845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.694 [2024-07-26 11:37:26.071880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.694 qpair failed and we were unable to recover it. 00:29:30.694 [2024-07-26 11:37:26.072054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.694 [2024-07-26 11:37:26.072083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.694 qpair failed and we were unable to recover it. 00:29:30.694 [2024-07-26 11:37:26.072219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.694 [2024-07-26 11:37:26.072254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.694 qpair failed and we were unable to recover it. 00:29:30.694 [2024-07-26 11:37:26.072480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.694 [2024-07-26 11:37:26.072528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.694 qpair failed and we were unable to recover it. 00:29:30.694 [2024-07-26 11:37:26.072683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.694 [2024-07-26 11:37:26.072711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.694 qpair failed and we were unable to recover it. 
00:29:30.694 [2024-07-26 11:37:26.072995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.694 [2024-07-26 11:37:26.073023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.694 qpair failed and we were unable to recover it. 00:29:30.694 [2024-07-26 11:37:26.073323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.694 [2024-07-26 11:37:26.073358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.694 qpair failed and we were unable to recover it. 00:29:30.694 [2024-07-26 11:37:26.073617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.694 [2024-07-26 11:37:26.073646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.694 qpair failed and we were unable to recover it. 00:29:30.694 [2024-07-26 11:37:26.073862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.694 [2024-07-26 11:37:26.073897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.694 qpair failed and we were unable to recover it. 00:29:30.694 [2024-07-26 11:37:26.074130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.694 [2024-07-26 11:37:26.074158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.694 qpair failed and we were unable to recover it. 00:29:30.694 [2024-07-26 11:37:26.074454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.694 [2024-07-26 11:37:26.074510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.694 qpair failed and we were unable to recover it. 00:29:30.694 [2024-07-26 11:37:26.074652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.694 [2024-07-26 11:37:26.074694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.694 qpair failed and we were unable to recover it. 00:29:30.694 [2024-07-26 11:37:26.074891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.694 [2024-07-26 11:37:26.074926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.694 qpair failed and we were unable to recover it. 00:29:30.694 [2024-07-26 11:37:26.075109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.694 [2024-07-26 11:37:26.075138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.694 qpair failed and we were unable to recover it. 00:29:30.694 [2024-07-26 11:37:26.075332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.694 [2024-07-26 11:37:26.075366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.694 qpair failed and we were unable to recover it. 
00:29:30.694 [2024-07-26 11:37:26.075547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.694 [2024-07-26 11:37:26.075576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.694 qpair failed and we were unable to recover it. 00:29:30.694 [2024-07-26 11:37:26.075825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.694 [2024-07-26 11:37:26.075861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.694 qpair failed and we were unable to recover it. 00:29:30.694 [2024-07-26 11:37:26.076139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.694 [2024-07-26 11:37:26.076168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.694 qpair failed and we were unable to recover it. 00:29:30.694 [2024-07-26 11:37:26.076354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.694 [2024-07-26 11:37:26.076389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.694 qpair failed and we were unable to recover it. 00:29:30.694 [2024-07-26 11:37:26.076584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.694 [2024-07-26 11:37:26.076613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.694 qpair failed and we were unable to recover it. 00:29:30.694 [2024-07-26 11:37:26.076796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.694 [2024-07-26 11:37:26.076831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.694 qpair failed and we were unable to recover it. 00:29:30.694 [2024-07-26 11:37:26.077062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.694 [2024-07-26 11:37:26.077097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.694 qpair failed and we were unable to recover it. 00:29:30.694 [2024-07-26 11:37:26.077297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.694 [2024-07-26 11:37:26.077332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.694 qpair failed and we were unable to recover it. 00:29:30.694 [2024-07-26 11:37:26.077516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.694 [2024-07-26 11:37:26.077546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.694 qpair failed and we were unable to recover it. 00:29:30.694 [2024-07-26 11:37:26.077715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.694 [2024-07-26 11:37:26.077762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.694 qpair failed and we were unable to recover it. 
00:29:30.694 [2024-07-26 11:37:26.078004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.694 [2024-07-26 11:37:26.078033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.694 qpair failed and we were unable to recover it. 00:29:30.694 [2024-07-26 11:37:26.078303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.694 [2024-07-26 11:37:26.078354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.694 qpair failed and we were unable to recover it. 00:29:30.694 [2024-07-26 11:37:26.078640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.694 [2024-07-26 11:37:26.078670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.694 qpair failed and we were unable to recover it. 00:29:30.694 [2024-07-26 11:37:26.079005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.694 [2024-07-26 11:37:26.079081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.694 qpair failed and we were unable to recover it. 00:29:30.694 [2024-07-26 11:37:26.079362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.694 [2024-07-26 11:37:26.079390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.694 qpair failed and we were unable to recover it. 00:29:30.694 [2024-07-26 11:37:26.079570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.694 [2024-07-26 11:37:26.079599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.695 qpair failed and we were unable to recover it. 00:29:30.695 [2024-07-26 11:37:26.079885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.695 [2024-07-26 11:37:26.079950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.695 qpair failed and we were unable to recover it. 00:29:30.695 [2024-07-26 11:37:26.080292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.695 [2024-07-26 11:37:26.080366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.695 qpair failed and we were unable to recover it. 00:29:30.695 [2024-07-26 11:37:26.080598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.695 [2024-07-26 11:37:26.080627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.695 qpair failed and we were unable to recover it. 00:29:30.695 [2024-07-26 11:37:26.080850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.695 [2024-07-26 11:37:26.080885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.695 qpair failed and we were unable to recover it. 
00:29:30.695 [2024-07-26 11:37:26.081147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.695 [2024-07-26 11:37:26.081211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:30.695 qpair failed and we were unable to recover it.
00:29:30.695 [... the same three-line error sequence repeats for each successive reconnect attempt, identical except for the advancing timestamps, through 2024-07-26 11:37:26.138754; the intermediate repetitions are collapsed here ...]
00:29:30.700 [2024-07-26 11:37:26.138706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.700 [2024-07-26 11:37:26.138754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:30.700 qpair failed and we were unable to recover it.
00:29:30.700 [2024-07-26 11:37:26.138973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.700 [2024-07-26 11:37:26.139002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.700 qpair failed and we were unable to recover it. 00:29:30.700 [2024-07-26 11:37:26.139215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.700 [2024-07-26 11:37:26.139250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.700 qpair failed and we were unable to recover it. 00:29:30.700 [2024-07-26 11:37:26.139509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.700 [2024-07-26 11:37:26.139538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.700 qpair failed and we were unable to recover it. 00:29:30.700 [2024-07-26 11:37:26.139755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.700 [2024-07-26 11:37:26.139790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.700 qpair failed and we were unable to recover it. 00:29:30.700 [2024-07-26 11:37:26.140044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.700 [2024-07-26 11:37:26.140073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.701 qpair failed and we were unable to recover it. 00:29:30.701 [2024-07-26 11:37:26.140253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.701 [2024-07-26 11:37:26.140288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.701 qpair failed and we were unable to recover it. 00:29:30.701 [2024-07-26 11:37:26.140510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.701 [2024-07-26 11:37:26.140540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.701 qpair failed and we were unable to recover it. 00:29:30.701 [2024-07-26 11:37:26.140706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.701 [2024-07-26 11:37:26.140753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.701 qpair failed and we were unable to recover it. 00:29:30.701 [2024-07-26 11:37:26.140979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.701 [2024-07-26 11:37:26.141007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.701 qpair failed and we were unable to recover it. 00:29:30.701 [2024-07-26 11:37:26.141187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.701 [2024-07-26 11:37:26.141222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.701 qpair failed and we were unable to recover it. 
00:29:30.701 [2024-07-26 11:37:26.141411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.701 [2024-07-26 11:37:26.141501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.701 qpair failed and we were unable to recover it. 00:29:30.701 [2024-07-26 11:37:26.141680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.701 [2024-07-26 11:37:26.141709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.701 qpair failed and we were unable to recover it. 00:29:30.701 [2024-07-26 11:37:26.141928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.701 [2024-07-26 11:37:26.141957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.701 qpair failed and we were unable to recover it. 00:29:30.701 [2024-07-26 11:37:26.142160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.701 [2024-07-26 11:37:26.142195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.701 qpair failed and we were unable to recover it. 00:29:30.701 [2024-07-26 11:37:26.142488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.701 [2024-07-26 11:37:26.142535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.701 qpair failed and we were unable to recover it. 00:29:30.701 [2024-07-26 11:37:26.142716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.701 [2024-07-26 11:37:26.142761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.701 qpair failed and we were unable to recover it. 00:29:30.701 [2024-07-26 11:37:26.142939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.701 [2024-07-26 11:37:26.142967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.701 qpair failed and we were unable to recover it. 00:29:30.701 [2024-07-26 11:37:26.143156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.701 [2024-07-26 11:37:26.143191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.701 qpair failed and we were unable to recover it. 00:29:30.701 [2024-07-26 11:37:26.143404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.701 [2024-07-26 11:37:26.143502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.701 qpair failed and we were unable to recover it. 00:29:30.701 [2024-07-26 11:37:26.143686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.701 [2024-07-26 11:37:26.143714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.701 qpair failed and we were unable to recover it. 
00:29:30.701 [2024-07-26 11:37:26.143932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.701 [2024-07-26 11:37:26.143960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.701 qpair failed and we were unable to recover it. 00:29:30.701 [2024-07-26 11:37:26.144152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.701 [2024-07-26 11:37:26.144187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.701 qpair failed and we were unable to recover it. 00:29:30.701 [2024-07-26 11:37:26.144406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.701 [2024-07-26 11:37:26.144490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.701 qpair failed and we were unable to recover it. 00:29:30.701 [2024-07-26 11:37:26.144671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.701 [2024-07-26 11:37:26.144700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.701 qpair failed and we were unable to recover it. 00:29:30.701 [2024-07-26 11:37:26.144937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.701 [2024-07-26 11:37:26.144965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.701 qpair failed and we were unable to recover it. 00:29:30.701 [2024-07-26 11:37:26.145199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.701 [2024-07-26 11:37:26.145234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.701 qpair failed and we were unable to recover it. 00:29:30.701 [2024-07-26 11:37:26.145509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.701 [2024-07-26 11:37:26.145538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.701 qpair failed and we were unable to recover it. 00:29:30.701 [2024-07-26 11:37:26.145736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.701 [2024-07-26 11:37:26.145771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.701 qpair failed and we were unable to recover it. 00:29:30.701 [2024-07-26 11:37:26.145960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.701 [2024-07-26 11:37:26.145988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.701 qpair failed and we were unable to recover it. 00:29:30.701 [2024-07-26 11:37:26.146176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.701 [2024-07-26 11:37:26.146211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.701 qpair failed and we were unable to recover it. 
00:29:30.701 [2024-07-26 11:37:26.146459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.701 [2024-07-26 11:37:26.146512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.701 qpair failed and we were unable to recover it. 00:29:30.701 [2024-07-26 11:37:26.146736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.701 [2024-07-26 11:37:26.146770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.701 qpair failed and we were unable to recover it. 00:29:30.701 [2024-07-26 11:37:26.147066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.701 [2024-07-26 11:37:26.147095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.701 qpair failed and we were unable to recover it. 00:29:30.701 [2024-07-26 11:37:26.147283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.701 [2024-07-26 11:37:26.147324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.701 qpair failed and we were unable to recover it. 00:29:30.701 [2024-07-26 11:37:26.147510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.701 [2024-07-26 11:37:26.147539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.701 qpair failed and we were unable to recover it. 00:29:30.701 [2024-07-26 11:37:26.147733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.701 [2024-07-26 11:37:26.147768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.701 qpair failed and we were unable to recover it. 00:29:30.701 [2024-07-26 11:37:26.147969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.702 [2024-07-26 11:37:26.147997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.702 qpair failed and we were unable to recover it. 00:29:30.702 [2024-07-26 11:37:26.148186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.702 [2024-07-26 11:37:26.148220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.702 qpair failed and we were unable to recover it. 00:29:30.702 [2024-07-26 11:37:26.148457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.702 [2024-07-26 11:37:26.148520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.702 qpair failed and we were unable to recover it. 00:29:30.702 [2024-07-26 11:37:26.148705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.702 [2024-07-26 11:37:26.148755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.702 qpair failed and we were unable to recover it. 
00:29:30.702 [2024-07-26 11:37:26.148975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.702 [2024-07-26 11:37:26.149003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.702 qpair failed and we were unable to recover it. 00:29:30.702 [2024-07-26 11:37:26.149224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.702 [2024-07-26 11:37:26.149259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.702 qpair failed and we were unable to recover it. 00:29:30.702 [2024-07-26 11:37:26.149514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.702 [2024-07-26 11:37:26.149543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.702 qpair failed and we were unable to recover it. 00:29:30.702 [2024-07-26 11:37:26.149754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.702 [2024-07-26 11:37:26.149789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.702 qpair failed and we were unable to recover it. 00:29:30.702 [2024-07-26 11:37:26.149998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.702 [2024-07-26 11:37:26.150027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.702 qpair failed and we were unable to recover it. 00:29:30.702 [2024-07-26 11:37:26.150215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.702 [2024-07-26 11:37:26.150250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.702 qpair failed and we were unable to recover it. 00:29:30.702 [2024-07-26 11:37:26.150478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.702 [2024-07-26 11:37:26.150531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.702 qpair failed and we were unable to recover it. 00:29:30.702 [2024-07-26 11:37:26.150710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.702 [2024-07-26 11:37:26.150754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.702 qpair failed and we were unable to recover it. 00:29:30.702 [2024-07-26 11:37:26.150974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.702 [2024-07-26 11:37:26.151003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.702 qpair failed and we were unable to recover it. 00:29:30.702 [2024-07-26 11:37:26.151207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.702 [2024-07-26 11:37:26.151243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.702 qpair failed and we were unable to recover it. 
00:29:30.702 [2024-07-26 11:37:26.151462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.702 [2024-07-26 11:37:26.151518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.702 qpair failed and we were unable to recover it. 00:29:30.702 [2024-07-26 11:37:26.151704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.702 [2024-07-26 11:37:26.151750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.702 qpair failed and we were unable to recover it. 00:29:30.702 [2024-07-26 11:37:26.152013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.702 [2024-07-26 11:37:26.152041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.702 qpair failed and we were unable to recover it. 00:29:30.702 [2024-07-26 11:37:26.152231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.702 [2024-07-26 11:37:26.152267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.702 qpair failed and we were unable to recover it. 00:29:30.702 [2024-07-26 11:37:26.152450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.702 [2024-07-26 11:37:26.152516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.702 qpair failed and we were unable to recover it. 00:29:30.702 [2024-07-26 11:37:26.152736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.702 [2024-07-26 11:37:26.152771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.702 qpair failed and we were unable to recover it. 00:29:30.702 [2024-07-26 11:37:26.152952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.702 [2024-07-26 11:37:26.152981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.702 qpair failed and we were unable to recover it. 00:29:30.702 [2024-07-26 11:37:26.153191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.702 [2024-07-26 11:37:26.153226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.702 qpair failed and we were unable to recover it. 00:29:30.702 [2024-07-26 11:37:26.153456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.702 [2024-07-26 11:37:26.153506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.702 qpair failed and we were unable to recover it. 00:29:30.702 [2024-07-26 11:37:26.153699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.702 [2024-07-26 11:37:26.153728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.702 qpair failed and we were unable to recover it. 
00:29:30.702 [2024-07-26 11:37:26.153952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.702 [2024-07-26 11:37:26.153981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.702 qpair failed and we were unable to recover it. 00:29:30.702 [2024-07-26 11:37:26.154169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.702 [2024-07-26 11:37:26.154204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.702 qpair failed and we were unable to recover it. 00:29:30.702 [2024-07-26 11:37:26.154462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.702 [2024-07-26 11:37:26.154528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.702 qpair failed and we were unable to recover it. 00:29:30.702 [2024-07-26 11:37:26.154824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.702 [2024-07-26 11:37:26.154859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.702 qpair failed and we were unable to recover it. 00:29:30.702 [2024-07-26 11:37:26.155134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.702 [2024-07-26 11:37:26.155163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.702 qpair failed and we were unable to recover it. 00:29:30.702 [2024-07-26 11:37:26.155359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.702 [2024-07-26 11:37:26.155394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.702 qpair failed and we were unable to recover it. 00:29:30.702 [2024-07-26 11:37:26.155638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.702 [2024-07-26 11:37:26.155666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.702 qpair failed and we were unable to recover it. 00:29:30.702 [2024-07-26 11:37:26.155856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.702 [2024-07-26 11:37:26.155891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.702 qpair failed and we were unable to recover it. 00:29:30.702 [2024-07-26 11:37:26.156089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.702 [2024-07-26 11:37:26.156119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.702 qpair failed and we were unable to recover it. 00:29:30.702 [2024-07-26 11:37:26.156348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.702 [2024-07-26 11:37:26.156412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.702 qpair failed and we were unable to recover it. 
00:29:30.702 [2024-07-26 11:37:26.156686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.702 [2024-07-26 11:37:26.156734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.702 qpair failed and we were unable to recover it. 00:29:30.702 [2024-07-26 11:37:26.156989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.702 [2024-07-26 11:37:26.157025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.702 qpair failed and we were unable to recover it. 00:29:30.702 [2024-07-26 11:37:26.157244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.702 [2024-07-26 11:37:26.157272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.702 qpair failed and we were unable to recover it. 00:29:30.702 [2024-07-26 11:37:26.157474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.702 [2024-07-26 11:37:26.157558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.702 qpair failed and we were unable to recover it. 00:29:30.703 [2024-07-26 11:37:26.157776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.703 [2024-07-26 11:37:26.157841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.703 qpair failed and we were unable to recover it. 00:29:30.703 [2024-07-26 11:37:26.158059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.703 [2024-07-26 11:37:26.158095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.703 qpair failed and we were unable to recover it. 00:29:30.703 [2024-07-26 11:37:26.158311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.703 [2024-07-26 11:37:26.158340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.703 qpair failed and we were unable to recover it. 00:29:30.703 [2024-07-26 11:37:26.158560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.703 [2024-07-26 11:37:26.158589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.703 qpair failed and we were unable to recover it. 00:29:30.703 [2024-07-26 11:37:26.158784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.703 [2024-07-26 11:37:26.158848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.703 qpair failed and we were unable to recover it. 00:29:30.703 [2024-07-26 11:37:26.159108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.703 [2024-07-26 11:37:26.159143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.703 qpair failed and we were unable to recover it. 
00:29:30.703 [2024-07-26 11:37:26.159322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.703 [2024-07-26 11:37:26.159350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.703 qpair failed and we were unable to recover it. 00:29:30.703 [2024-07-26 11:37:26.159560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.703 [2024-07-26 11:37:26.159589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.703 qpair failed and we were unable to recover it. 00:29:30.703 [2024-07-26 11:37:26.159800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.703 [2024-07-26 11:37:26.159864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.703 qpair failed and we were unable to recover it. 00:29:30.703 [2024-07-26 11:37:26.160141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.703 [2024-07-26 11:37:26.160176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.703 qpair failed and we were unable to recover it. 00:29:30.703 [2024-07-26 11:37:26.160408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.703 [2024-07-26 11:37:26.160452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.703 qpair failed and we were unable to recover it. 00:29:30.703 [2024-07-26 11:37:26.160625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.703 [2024-07-26 11:37:26.160653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.703 qpair failed and we were unable to recover it. 00:29:30.703 [2024-07-26 11:37:26.160898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.703 [2024-07-26 11:37:26.160961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.703 qpair failed and we were unable to recover it. 00:29:30.703 [2024-07-26 11:37:26.161257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.703 [2024-07-26 11:37:26.161292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.703 qpair failed and we were unable to recover it. 00:29:30.703 [2024-07-26 11:37:26.161505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.703 [2024-07-26 11:37:26.161535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.703 qpair failed and we were unable to recover it. 00:29:30.703 [2024-07-26 11:37:26.161720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.703 [2024-07-26 11:37:26.161754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.703 qpair failed and we were unable to recover it. 
00:29:30.703 [2024-07-26 11:37:26.161954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.703 [2024-07-26 11:37:26.162017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.703 qpair failed and we were unable to recover it. 00:29:30.703 [2024-07-26 11:37:26.162270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.703 [2024-07-26 11:37:26.162305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.703 qpair failed and we were unable to recover it. 00:29:30.703 [2024-07-26 11:37:26.162525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.703 [2024-07-26 11:37:26.162554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.703 qpair failed and we were unable to recover it. 00:29:30.703 [2024-07-26 11:37:26.162747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.703 [2024-07-26 11:37:26.162782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.703 qpair failed and we were unable to recover it. 00:29:30.703 [2024-07-26 11:37:26.162962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.703 [2024-07-26 11:37:26.163027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.703 qpair failed and we were unable to recover it. 00:29:30.703 [2024-07-26 11:37:26.163239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.703 [2024-07-26 11:37:26.163274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.703 qpair failed and we were unable to recover it. 00:29:30.703 [2024-07-26 11:37:26.163511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.703 [2024-07-26 11:37:26.163541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.703 qpair failed and we were unable to recover it. 00:29:30.703 [2024-07-26 11:37:26.163755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.703 [2024-07-26 11:37:26.163803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.703 qpair failed and we were unable to recover it. 00:29:30.703 [2024-07-26 11:37:26.164040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.703 [2024-07-26 11:37:26.164104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.703 qpair failed and we were unable to recover it. 00:29:30.703 [2024-07-26 11:37:26.164333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.703 [2024-07-26 11:37:26.164368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.703 qpair failed and we were unable to recover it. 
00:29:30.703 [2024-07-26 11:37:26.164598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.703 [2024-07-26 11:37:26.164627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.703 qpair failed and we were unable to recover it. 00:29:30.703 [2024-07-26 11:37:26.164843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.703 [2024-07-26 11:37:26.164878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.703 qpair failed and we were unable to recover it. 00:29:30.703 [2024-07-26 11:37:26.165159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.703 [2024-07-26 11:37:26.165223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.703 qpair failed and we were unable to recover it. 00:29:30.703 [2024-07-26 11:37:26.165475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.703 [2024-07-26 11:37:26.165522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.703 qpair failed and we were unable to recover it. 00:29:30.703 [2024-07-26 11:37:26.165737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.703 [2024-07-26 11:37:26.165765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.703 qpair failed and we were unable to recover it. 00:29:30.703 [2024-07-26 11:37:26.166048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.703 [2024-07-26 11:37:26.166083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.703 qpair failed and we were unable to recover it. 00:29:30.703 [2024-07-26 11:37:26.166327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.703 [2024-07-26 11:37:26.166390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.703 qpair failed and we were unable to recover it. 00:29:30.703 [2024-07-26 11:37:26.166702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.703 [2024-07-26 11:37:26.166750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.703 qpair failed and we were unable to recover it. 00:29:30.703 [2024-07-26 11:37:26.166990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.703 [2024-07-26 11:37:26.167018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.703 qpair failed and we were unable to recover it. 00:29:30.703 [2024-07-26 11:37:26.167235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.703 [2024-07-26 11:37:26.167270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.703 qpair failed and we were unable to recover it. 
00:29:30.703 [2024-07-26 11:37:26.167577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.703 [2024-07-26 11:37:26.167606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.703 qpair failed and we were unable to recover it. 00:29:30.703 [2024-07-26 11:37:26.167882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.704 [2024-07-26 11:37:26.167947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.704 qpair failed and we were unable to recover it. 00:29:30.704 [2024-07-26 11:37:26.168235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.704 [2024-07-26 11:37:26.168263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.704 qpair failed and we were unable to recover it. 00:29:30.704 [2024-07-26 11:37:26.168464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.704 [2024-07-26 11:37:26.168515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.704 qpair failed and we were unable to recover it. 00:29:30.704 [2024-07-26 11:37:26.168710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.704 [2024-07-26 11:37:26.168774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.704 qpair failed and we were unable to recover it. 00:29:30.704 [2024-07-26 11:37:26.169076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.704 [2024-07-26 11:37:26.169110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.704 qpair failed and we were unable to recover it. 00:29:30.704 [2024-07-26 11:37:26.169386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.704 [2024-07-26 11:37:26.169414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.704 qpair failed and we were unable to recover it. 00:29:30.704 [2024-07-26 11:37:26.169684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.704 [2024-07-26 11:37:26.169720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.704 qpair failed and we were unable to recover it. 00:29:30.704 [2024-07-26 11:37:26.170033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.704 [2024-07-26 11:37:26.170096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.704 qpair failed and we were unable to recover it. 00:29:30.704 [2024-07-26 11:37:26.170401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.704 [2024-07-26 11:37:26.170528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.704 qpair failed and we were unable to recover it. 
00:29:30.704 [2024-07-26 11:37:26.170806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.704 [2024-07-26 11:37:26.170882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.704 qpair failed and we were unable to recover it. 00:29:30.704 [2024-07-26 11:37:26.171172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.704 [2024-07-26 11:37:26.171207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.704 qpair failed and we were unable to recover it. 00:29:30.704 [2024-07-26 11:37:26.171460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.704 [2024-07-26 11:37:26.171516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.704 qpair failed and we were unable to recover it. 00:29:30.704 [2024-07-26 11:37:26.171693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.704 [2024-07-26 11:37:26.171740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.704 qpair failed and we were unable to recover it. 00:29:30.704 [2024-07-26 11:37:26.172008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.704 [2024-07-26 11:37:26.172037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.704 qpair failed and we were unable to recover it. 00:29:30.704 [2024-07-26 11:37:26.172258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.704 [2024-07-26 11:37:26.172293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.704 qpair failed and we were unable to recover it. 00:29:30.704 [2024-07-26 11:37:26.172539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.704 [2024-07-26 11:37:26.172568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.704 qpair failed and we were unable to recover it. 00:29:30.704 [2024-07-26 11:37:26.172777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.704 [2024-07-26 11:37:26.172812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.704 qpair failed and we were unable to recover it. 00:29:30.704 [2024-07-26 11:37:26.173066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.704 [2024-07-26 11:37:26.173095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.704 qpair failed and we were unable to recover it. 00:29:30.704 [2024-07-26 11:37:26.173301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.704 [2024-07-26 11:37:26.173336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.704 qpair failed and we were unable to recover it. 
00:29:30.704 [2024-07-26 11:37:26.173559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.704 [2024-07-26 11:37:26.173587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:30.704 qpair failed and we were unable to recover it.
[... the same three-line sequence — connect() failed (errno = 111), sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats back-to-back from 11:37:26.173559 through 11:37:26.232808; only the timestamps change ...]
00:29:30.710 [2024-07-26 11:37:26.232766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.710 [2024-07-26 11:37:26.232808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:30.710 qpair failed and we were unable to recover it.
00:29:30.710 [2024-07-26 11:37:26.232997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.710 [2024-07-26 11:37:26.233026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.710 qpair failed and we were unable to recover it. 00:29:30.710 [2024-07-26 11:37:26.233218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.710 [2024-07-26 11:37:26.233253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.710 qpair failed and we were unable to recover it. 00:29:30.710 [2024-07-26 11:37:26.233510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.710 [2024-07-26 11:37:26.233539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.710 qpair failed and we were unable to recover it. 00:29:30.710 [2024-07-26 11:37:26.233683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.710 [2024-07-26 11:37:26.233712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.710 qpair failed and we were unable to recover it. 00:29:30.710 [2024-07-26 11:37:26.233950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.710 [2024-07-26 11:37:26.233978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.710 qpair failed and we were unable to recover it. 00:29:30.710 [2024-07-26 11:37:26.234180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.710 [2024-07-26 11:37:26.234215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.710 qpair failed and we were unable to recover it. 00:29:30.710 [2024-07-26 11:37:26.234398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.710 [2024-07-26 11:37:26.234484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.710 qpair failed and we were unable to recover it. 00:29:30.710 [2024-07-26 11:37:26.234703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.710 [2024-07-26 11:37:26.234732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.710 qpair failed and we were unable to recover it. 00:29:30.710 [2024-07-26 11:37:26.234970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.710 [2024-07-26 11:37:26.234999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.710 qpair failed and we were unable to recover it. 00:29:30.710 [2024-07-26 11:37:26.235245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.710 [2024-07-26 11:37:26.235280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.710 qpair failed and we were unable to recover it. 
00:29:30.710 [2024-07-26 11:37:26.235553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.710 [2024-07-26 11:37:26.235582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.710 qpair failed and we were unable to recover it. 00:29:30.710 [2024-07-26 11:37:26.235821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.710 [2024-07-26 11:37:26.235856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.710 qpair failed and we were unable to recover it. 00:29:30.710 [2024-07-26 11:37:26.236117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.710 [2024-07-26 11:37:26.236145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.710 qpair failed and we were unable to recover it. 00:29:30.710 [2024-07-26 11:37:26.236343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.710 [2024-07-26 11:37:26.236417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.710 qpair failed and we were unable to recover it. 00:29:30.710 [2024-07-26 11:37:26.236683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.710 [2024-07-26 11:37:26.236711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.710 qpair failed and we were unable to recover it. 00:29:30.710 [2024-07-26 11:37:26.236972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.710 [2024-07-26 11:37:26.237007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.710 qpair failed and we were unable to recover it. 00:29:30.710 [2024-07-26 11:37:26.237205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.710 [2024-07-26 11:37:26.237233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.710 qpair failed and we were unable to recover it. 00:29:30.710 [2024-07-26 11:37:26.237419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.710 [2024-07-26 11:37:26.237488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.710 qpair failed and we were unable to recover it. 00:29:30.710 [2024-07-26 11:37:26.237627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.710 [2024-07-26 11:37:26.237655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.710 qpair failed and we were unable to recover it. 00:29:30.710 [2024-07-26 11:37:26.237835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.710 [2024-07-26 11:37:26.237870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.710 qpair failed and we were unable to recover it. 
00:29:30.710 [2024-07-26 11:37:26.238085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.710 [2024-07-26 11:37:26.238113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.710 qpair failed and we were unable to recover it. 00:29:30.710 [2024-07-26 11:37:26.238324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.710 [2024-07-26 11:37:26.238358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.710 qpair failed and we were unable to recover it. 00:29:30.710 [2024-07-26 11:37:26.238584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.710 [2024-07-26 11:37:26.238613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.710 qpair failed and we were unable to recover it. 00:29:30.711 [2024-07-26 11:37:26.238815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-07-26 11:37:26.238851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-07-26 11:37:26.239042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-07-26 11:37:26.239070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-07-26 11:37:26.239260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-07-26 11:37:26.239325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-07-26 11:37:26.239628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-07-26 11:37:26.239657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-07-26 11:37:26.239877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-07-26 11:37:26.239912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-07-26 11:37:26.240134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-07-26 11:37:26.240163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-07-26 11:37:26.240371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-07-26 11:37:26.240477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 
00:29:30.711 [2024-07-26 11:37:26.240692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-07-26 11:37:26.240762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-07-26 11:37:26.241036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-07-26 11:37:26.241070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-07-26 11:37:26.241292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-07-26 11:37:26.241321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-07-26 11:37:26.241518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-07-26 11:37:26.241548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-07-26 11:37:26.241740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-07-26 11:37:26.241804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-07-26 11:37:26.242043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-07-26 11:37:26.242078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-07-26 11:37:26.242296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-07-26 11:37:26.242325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-07-26 11:37:26.242555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-07-26 11:37:26.242584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-07-26 11:37:26.242830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-07-26 11:37:26.242894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-07-26 11:37:26.243148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-07-26 11:37:26.243183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 
00:29:30.711 [2024-07-26 11:37:26.243417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-07-26 11:37:26.243459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-07-26 11:37:26.243650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-07-26 11:37:26.243710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-07-26 11:37:26.243965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-07-26 11:37:26.244029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-07-26 11:37:26.244329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-07-26 11:37:26.244394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-07-26 11:37:26.244671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-07-26 11:37:26.244711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-07-26 11:37:26.244911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-07-26 11:37:26.244946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-07-26 11:37:26.245160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-07-26 11:37:26.245223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-07-26 11:37:26.245514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-07-26 11:37:26.245543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-07-26 11:37:26.245749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-07-26 11:37:26.245778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-07-26 11:37:26.246066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-07-26 11:37:26.246101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 
00:29:30.711 [2024-07-26 11:37:26.246353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-07-26 11:37:26.246416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-07-26 11:37:26.246677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-07-26 11:37:26.246706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-07-26 11:37:26.246956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-07-26 11:37:26.246985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-07-26 11:37:26.247194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-07-26 11:37:26.247235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-07-26 11:37:26.247470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-07-26 11:37:26.247527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-07-26 11:37:26.247754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-07-26 11:37:26.247789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-07-26 11:37:26.247994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-07-26 11:37:26.248023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-07-26 11:37:26.248217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-07-26 11:37:26.248246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-07-26 11:37:26.248471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-07-26 11:37:26.248518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-07-26 11:37:26.248723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.711 [2024-07-26 11:37:26.248758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.711 qpair failed and we were unable to recover it. 
00:29:30.711 [2024-07-26 11:37:26.248959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-07-26 11:37:26.248995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-07-26 11:37:26.249192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-07-26 11:37:26.249227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-07-26 11:37:26.249382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-07-26 11:37:26.249418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-07-26 11:37:26.249607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-07-26 11:37:26.249637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-07-26 11:37:26.249831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-07-26 11:37:26.249866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-07-26 11:37:26.250079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-07-26 11:37:26.250115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-07-26 11:37:26.250301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-07-26 11:37:26.250337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-07-26 11:37:26.250503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-07-26 11:37:26.250533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-07-26 11:37:26.250739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-07-26 11:37:26.250776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-07-26 11:37:26.250966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-07-26 11:37:26.251002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 
00:29:30.712 [2024-07-26 11:37:26.251181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-07-26 11:37:26.251217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-07-26 11:37:26.251439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-07-26 11:37:26.251489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-07-26 11:37:26.251638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-07-26 11:37:26.251666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-07-26 11:37:26.251879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-07-26 11:37:26.251908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-07-26 11:37:26.252152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-07-26 11:37:26.252187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-07-26 11:37:26.252417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-07-26 11:37:26.252473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-07-26 11:37:26.252674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-07-26 11:37:26.252702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-07-26 11:37:26.252880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-07-26 11:37:26.252908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-07-26 11:37:26.253124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-07-26 11:37:26.253159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-07-26 11:37:26.253333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-07-26 11:37:26.253368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 
00:29:30.712 [2024-07-26 11:37:26.253562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-07-26 11:37:26.253591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-07-26 11:37:26.253791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-07-26 11:37:26.253820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-07-26 11:37:26.254025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-07-26 11:37:26.254061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-07-26 11:37:26.254278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-07-26 11:37:26.254344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-07-26 11:37:26.254631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-07-26 11:37:26.254660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-07-26 11:37:26.254837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-07-26 11:37:26.254865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-07-26 11:37:26.255094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-07-26 11:37:26.255157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-07-26 11:37:26.255445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-07-26 11:37:26.255505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-07-26 11:37:26.255683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-07-26 11:37:26.255711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-07-26 11:37:26.255889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-07-26 11:37:26.255918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 
00:29:30.712 [2024-07-26 11:37:26.256117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-07-26 11:37:26.256181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-07-26 11:37:26.256484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-07-26 11:37:26.256531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-07-26 11:37:26.256715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-07-26 11:37:26.256743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-07-26 11:37:26.256946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-07-26 11:37:26.256979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-07-26 11:37:26.257274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-07-26 11:37:26.257337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-07-26 11:37:26.261445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-07-26 11:37:26.261510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-07-26 11:37:26.261763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-07-26 11:37:26.261810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.712 [2024-07-26 11:37:26.262117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.712 [2024-07-26 11:37:26.262167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.712 qpair failed and we were unable to recover it. 00:29:30.713 [2024-07-26 11:37:26.262418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-07-26 11:37:26.262463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-07-26 11:37:26.262721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-07-26 11:37:26.262754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 
00:29:30.713 [2024-07-26 11:37:26.262969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-07-26 11:37:26.262998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-07-26 11:37:26.263208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-07-26 11:37:26.263237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-07-26 11:37:26.263468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-07-26 11:37:26.263515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-07-26 11:37:26.263709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-07-26 11:37:26.263742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-07-26 11:37:26.263946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-07-26 11:37:26.263974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-07-26 11:37:26.264155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-07-26 11:37:26.264184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-07-26 11:37:26.264385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-07-26 11:37:26.264418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-07-26 11:37:26.264625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-07-26 11:37:26.264654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-07-26 11:37:26.264856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-07-26 11:37:26.264885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-07-26 11:37:26.265057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-07-26 11:37:26.265085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 
00:29:30.713 [2024-07-26 11:37:26.265310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-07-26 11:37:26.265344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-07-26 11:37:26.265542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-07-26 11:37:26.265576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-07-26 11:37:26.265794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-07-26 11:37:26.265822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-07-26 11:37:26.266011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-07-26 11:37:26.266040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-07-26 11:37:26.266242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-07-26 11:37:26.266276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-07-26 11:37:26.266446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-07-26 11:37:26.266494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-07-26 11:37:26.266659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-07-26 11:37:26.266688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-07-26 11:37:26.266880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-07-26 11:37:26.266909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-07-26 11:37:26.267129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-07-26 11:37:26.267192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-07-26 11:37:26.267448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-07-26 11:37:26.267499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 
00:29:30.713 [2024-07-26 11:37:26.267690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-07-26 11:37:26.267719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-07-26 11:37:26.267909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-07-26 11:37:26.267937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-07-26 11:37:26.268171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-07-26 11:37:26.268235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-07-26 11:37:26.268476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-07-26 11:37:26.268523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-07-26 11:37:26.268713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-07-26 11:37:26.268741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-07-26 11:37:26.268899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-07-26 11:37:26.268927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-07-26 11:37:26.269128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-07-26 11:37:26.269193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-07-26 11:37:26.269461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-07-26 11:37:26.269507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-07-26 11:37:26.269678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-07-26 11:37:26.269706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-07-26 11:37:26.269867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-07-26 11:37:26.269895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 
00:29:30.713 [2024-07-26 11:37:26.270073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-07-26 11:37:26.270136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-07-26 11:37:26.270398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-07-26 11:37:26.270441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-07-26 11:37:26.270641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-07-26 11:37:26.270670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-07-26 11:37:26.270855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-07-26 11:37:26.270883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.713 [2024-07-26 11:37:26.271145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.713 [2024-07-26 11:37:26.271209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.713 qpair failed and we were unable to recover it. 00:29:30.714 [2024-07-26 11:37:26.271486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.714 [2024-07-26 11:37:26.271515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.714 qpair failed and we were unable to recover it. 00:29:30.714 [2024-07-26 11:37:26.271730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.714 [2024-07-26 11:37:26.271758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.714 qpair failed and we were unable to recover it. 00:29:30.714 [2024-07-26 11:37:26.272035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.714 [2024-07-26 11:37:26.272064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.714 qpair failed and we were unable to recover it. 00:29:30.714 [2024-07-26 11:37:26.272309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.714 [2024-07-26 11:37:26.272372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.714 qpair failed and we were unable to recover it. 00:29:30.714 [2024-07-26 11:37:26.272673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.714 [2024-07-26 11:37:26.272702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.714 qpair failed and we were unable to recover it. 
00:29:30.714 [2024-07-26 11:37:26.272973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.714 [2024-07-26 11:37:26.273001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.714 qpair failed and we were unable to recover it. 00:29:30.714 [2024-07-26 11:37:26.273210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.714 [2024-07-26 11:37:26.273238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.714 qpair failed and we were unable to recover it. 00:29:30.714 [2024-07-26 11:37:26.273458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.714 [2024-07-26 11:37:26.273520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.714 qpair failed and we were unable to recover it. 00:29:30.714 [2024-07-26 11:37:26.273750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.714 [2024-07-26 11:37:26.273785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.714 qpair failed and we were unable to recover it. 00:29:30.714 [2024-07-26 11:37:26.274021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.714 [2024-07-26 11:37:26.274050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.714 qpair failed and we were unable to recover it. 00:29:30.714 [2024-07-26 11:37:26.274266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.714 [2024-07-26 11:37:26.274295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.714 qpair failed and we were unable to recover it. 00:29:30.714 [2024-07-26 11:37:26.274570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.714 [2024-07-26 11:37:26.274599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.714 qpair failed and we were unable to recover it. 00:29:30.714 [2024-07-26 11:37:26.274814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.714 [2024-07-26 11:37:26.274849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.714 qpair failed and we were unable to recover it. 00:29:30.714 [2024-07-26 11:37:26.275106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.714 [2024-07-26 11:37:26.275144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.714 qpair failed and we were unable to recover it. 00:29:30.714 [2024-07-26 11:37:26.275320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.714 [2024-07-26 11:37:26.275348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.714 qpair failed and we were unable to recover it. 
00:29:30.714 [2024-07-26 11:37:26.275520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.714 [2024-07-26 11:37:26.275550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.714 qpair failed and we were unable to recover it. 00:29:30.714 [2024-07-26 11:37:26.275739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.714 [2024-07-26 11:37:26.275774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.714 qpair failed and we were unable to recover it. 00:29:30.714 [2024-07-26 11:37:26.275990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.714 [2024-07-26 11:37:26.276018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.714 qpair failed and we were unable to recover it. 00:29:30.714 [2024-07-26 11:37:26.276195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.714 [2024-07-26 11:37:26.276224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.714 qpair failed and we were unable to recover it. 00:29:30.714 [2024-07-26 11:37:26.276443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.714 [2024-07-26 11:37:26.276511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.714 qpair failed and we were unable to recover it. 00:29:30.714 [2024-07-26 11:37:26.276682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.714 [2024-07-26 11:37:26.276710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.714 qpair failed and we were unable to recover it. 00:29:30.714 [2024-07-26 11:37:26.276943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.714 [2024-07-26 11:37:26.276971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.714 qpair failed and we were unable to recover it. 00:29:30.714 [2024-07-26 11:37:26.277151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.714 [2024-07-26 11:37:26.277180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.714 qpair failed and we were unable to recover it. 00:29:30.714 [2024-07-26 11:37:26.277384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.714 [2024-07-26 11:37:26.277462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.714 qpair failed and we were unable to recover it. 00:29:30.714 [2024-07-26 11:37:26.277676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.714 [2024-07-26 11:37:26.277705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.714 qpair failed and we were unable to recover it. 
00:29:30.714 [2024-07-26 11:37:26.277900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.714 [2024-07-26 11:37:26.277933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.714 qpair failed and we were unable to recover it. 00:29:30.714 [2024-07-26 11:37:26.278121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.714 [2024-07-26 11:37:26.278150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.714 qpair failed and we were unable to recover it. 00:29:30.714 [2024-07-26 11:37:26.278356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.714 [2024-07-26 11:37:26.278421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.714 qpair failed and we were unable to recover it. 00:29:30.714 [2024-07-26 11:37:26.278651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.714 [2024-07-26 11:37:26.278679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.714 qpair failed and we were unable to recover it. 00:29:30.714 [2024-07-26 11:37:26.278857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.714 [2024-07-26 11:37:26.278886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.714 qpair failed and we were unable to recover it. 00:29:30.714 [2024-07-26 11:37:26.279057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.714 [2024-07-26 11:37:26.279085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.714 qpair failed and we were unable to recover it. 00:29:30.714 [2024-07-26 11:37:26.279255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.714 [2024-07-26 11:37:26.279319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.714 qpair failed and we were unable to recover it. 00:29:30.714 [2024-07-26 11:37:26.279581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.714 [2024-07-26 11:37:26.279611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.714 qpair failed and we were unable to recover it. 00:29:30.714 [2024-07-26 11:37:26.279797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.714 [2024-07-26 11:37:26.279826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.714 qpair failed and we were unable to recover it. 00:29:30.714 [2024-07-26 11:37:26.280031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.714 [2024-07-26 11:37:26.280059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.714 qpair failed and we were unable to recover it. 
00:29:30.714 [2024-07-26 11:37:26.280344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.714 [2024-07-26 11:37:26.280408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.714 qpair failed and we were unable to recover it. 00:29:30.714 [2024-07-26 11:37:26.280664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.714 [2024-07-26 11:37:26.280693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.714 qpair failed and we were unable to recover it. 00:29:30.714 [2024-07-26 11:37:26.280928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.715 [2024-07-26 11:37:26.280956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.715 qpair failed and we were unable to recover it. 00:29:30.715 [2024-07-26 11:37:26.281161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.715 [2024-07-26 11:37:26.281189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.715 qpair failed and we were unable to recover it. 00:29:30.715 [2024-07-26 11:37:26.281415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.715 [2024-07-26 11:37:26.281504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.715 qpair failed and we were unable to recover it. 00:29:30.715 [2024-07-26 11:37:26.281704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.715 [2024-07-26 11:37:26.281749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.715 qpair failed and we were unable to recover it. 00:29:30.715 [2024-07-26 11:37:26.281961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.715 [2024-07-26 11:37:26.281990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.715 qpair failed and we were unable to recover it. 00:29:30.715 [2024-07-26 11:37:26.282180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.715 [2024-07-26 11:37:26.282209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.715 qpair failed and we were unable to recover it. 00:29:30.715 [2024-07-26 11:37:26.282475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.715 [2024-07-26 11:37:26.282529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.715 qpair failed and we were unable to recover it. 00:29:30.715 [2024-07-26 11:37:26.282748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.715 [2024-07-26 11:37:26.282783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.715 qpair failed and we were unable to recover it. 
00:29:30.715 [2024-07-26 11:37:26.282977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.715 [2024-07-26 11:37:26.283005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.715 qpair failed and we were unable to recover it. 00:29:30.715 [2024-07-26 11:37:26.283212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.715 [2024-07-26 11:37:26.283240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.715 qpair failed and we were unable to recover it. 00:29:30.715 [2024-07-26 11:37:26.283506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.715 [2024-07-26 11:37:26.283535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.715 qpair failed and we were unable to recover it. 00:29:30.715 [2024-07-26 11:37:26.283760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.715 [2024-07-26 11:37:26.283795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.715 qpair failed and we were unable to recover it. 00:29:30.715 [2024-07-26 11:37:26.284078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.715 [2024-07-26 11:37:26.284106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.715 qpair failed and we were unable to recover it. 00:29:30.715 [2024-07-26 11:37:26.284358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.715 [2024-07-26 11:37:26.284386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.715 qpair failed and we were unable to recover it. 00:29:30.715 [2024-07-26 11:37:26.284581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.715 [2024-07-26 11:37:26.284610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.715 qpair failed and we were unable to recover it. 00:29:30.715 [2024-07-26 11:37:26.284828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.715 [2024-07-26 11:37:26.284863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.715 qpair failed and we were unable to recover it. 00:29:30.715 [2024-07-26 11:37:26.285065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.715 [2024-07-26 11:37:26.285093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.715 qpair failed and we were unable to recover it. 00:29:30.715 [2024-07-26 11:37:26.285267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.715 [2024-07-26 11:37:26.285295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.715 qpair failed and we were unable to recover it. 
00:29:30.715 [2024-07-26 11:37:26.285487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.715 [2024-07-26 11:37:26.285516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.715 qpair failed and we were unable to recover it. 00:29:30.715 [2024-07-26 11:37:26.285732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.715 [2024-07-26 11:37:26.285777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.715 qpair failed and we were unable to recover it. 00:29:30.715 [2024-07-26 11:37:26.286029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.715 [2024-07-26 11:37:26.286057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.715 qpair failed and we were unable to recover it. 00:29:30.715 [2024-07-26 11:37:26.286236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.715 [2024-07-26 11:37:26.286265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.715 qpair failed and we were unable to recover it. 00:29:30.715 [2024-07-26 11:37:26.286505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.715 [2024-07-26 11:37:26.286555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.715 qpair failed and we were unable to recover it. 00:29:30.715 [2024-07-26 11:37:26.286782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.715 [2024-07-26 11:37:26.286816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.715 qpair failed and we were unable to recover it. 00:29:30.715 [2024-07-26 11:37:26.287103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.715 [2024-07-26 11:37:26.287132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.715 qpair failed and we were unable to recover it. 00:29:30.715 [2024-07-26 11:37:26.287321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.715 [2024-07-26 11:37:26.287350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.715 qpair failed and we were unable to recover it. 00:29:30.715 [2024-07-26 11:37:26.287567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.715 [2024-07-26 11:37:26.287596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.715 qpair failed and we were unable to recover it. 00:29:30.715 [2024-07-26 11:37:26.287798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.715 [2024-07-26 11:37:26.287833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.715 qpair failed and we were unable to recover it. 
00:29:30.715 [2024-07-26 11:37:26.288064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.715 [2024-07-26 11:37:26.288097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.715 qpair failed and we were unable to recover it. 00:29:30.715 [2024-07-26 11:37:26.288365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.715 [2024-07-26 11:37:26.288393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.715 qpair failed and we were unable to recover it. 00:29:30.715 [2024-07-26 11:37:26.288593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.715 [2024-07-26 11:37:26.288623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.715 qpair failed and we were unable to recover it. 00:29:30.715 [2024-07-26 11:37:26.288817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.715 [2024-07-26 11:37:26.288851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.715 qpair failed and we were unable to recover it. 00:29:30.715 [2024-07-26 11:37:26.289025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.716 [2024-07-26 11:37:26.289054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.716 qpair failed and we were unable to recover it. 00:29:30.716 [2024-07-26 11:37:26.289240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.716 [2024-07-26 11:37:26.289269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.716 qpair failed and we were unable to recover it. 00:29:30.716 [2024-07-26 11:37:26.289462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.716 [2024-07-26 11:37:26.289522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.716 qpair failed and we were unable to recover it. 00:29:30.716 [2024-07-26 11:37:26.289709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.716 [2024-07-26 11:37:26.289758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.716 qpair failed and we were unable to recover it. 00:29:30.716 [2024-07-26 11:37:26.289960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.716 [2024-07-26 11:37:26.289988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.716 qpair failed and we were unable to recover it. 00:29:30.716 [2024-07-26 11:37:26.290142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.716 [2024-07-26 11:37:26.290170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.716 qpair failed and we were unable to recover it. 
00:29:30.716 [2024-07-26 11:37:26.290362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.716 [2024-07-26 11:37:26.290445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.716 qpair failed and we were unable to recover it. 00:29:30.716 [2024-07-26 11:37:26.290624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.716 [2024-07-26 11:37:26.290653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.716 qpair failed and we were unable to recover it. 00:29:30.716 [2024-07-26 11:37:26.291766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.716 [2024-07-26 11:37:26.291841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.716 qpair failed and we were unable to recover it. 00:29:30.716 [2024-07-26 11:37:26.292106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.716 [2024-07-26 11:37:26.292135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.716 qpair failed and we were unable to recover it. 00:29:30.716 [2024-07-26 11:37:26.292362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.716 [2024-07-26 11:37:26.292448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.716 qpair failed and we were unable to recover it. 00:29:30.716 [2024-07-26 11:37:26.292641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.716 [2024-07-26 11:37:26.292670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.716 qpair failed and we were unable to recover it. 00:29:30.716 [2024-07-26 11:37:26.292851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.716 [2024-07-26 11:37:26.292880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.716 qpair failed and we were unable to recover it. 00:29:30.716 [2024-07-26 11:37:26.293037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.716 [2024-07-26 11:37:26.293066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.716 qpair failed and we were unable to recover it. 00:29:30.716 [2024-07-26 11:37:26.293281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.716 [2024-07-26 11:37:26.293346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.716 qpair failed and we were unable to recover it. 00:29:30.716 [2024-07-26 11:37:26.293586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.716 [2024-07-26 11:37:26.293615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.716 qpair failed and we were unable to recover it. 
00:29:30.716 [2024-07-26 11:37:26.293828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.716 [2024-07-26 11:37:26.293857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.716 qpair failed and we were unable to recover it. 00:29:30.716 [2024-07-26 11:37:26.294120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.716 [2024-07-26 11:37:26.294149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.716 qpair failed and we were unable to recover it. 00:29:30.716 [2024-07-26 11:37:26.294346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.716 [2024-07-26 11:37:26.294411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.716 qpair failed and we were unable to recover it. 00:29:30.716 [2024-07-26 11:37:26.294734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.716 [2024-07-26 11:37:26.294770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.716 qpair failed and we were unable to recover it. 00:29:30.716 [2024-07-26 11:37:26.295043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.716 [2024-07-26 11:37:26.295072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.716 qpair failed and we were unable to recover it. 00:29:30.716 [2024-07-26 11:37:26.295227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.716 [2024-07-26 11:37:26.295256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.716 qpair failed and we were unable to recover it. 00:29:30.716 [2024-07-26 11:37:26.295411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.716 [2024-07-26 11:37:26.295511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.716 qpair failed and we were unable to recover it. 00:29:30.716 [2024-07-26 11:37:26.295731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.716 [2024-07-26 11:37:26.295766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.716 qpair failed and we were unable to recover it. 00:29:30.716 [2024-07-26 11:37:26.295995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.716 [2024-07-26 11:37:26.296023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.716 qpair failed and we were unable to recover it. 00:29:30.716 [2024-07-26 11:37:26.296229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.716 [2024-07-26 11:37:26.296258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.716 qpair failed and we were unable to recover it. 
00:29:30.716 [2024-07-26 11:37:26.296509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.716 [2024-07-26 11:37:26.296538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.716 qpair failed and we were unable to recover it. 00:29:30.716 [2024-07-26 11:37:26.296682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.716 [2024-07-26 11:37:26.296710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.716 qpair failed and we were unable to recover it. 00:29:30.716 [2024-07-26 11:37:26.296949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.716 [2024-07-26 11:37:26.296978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.716 qpair failed and we were unable to recover it. 00:29:30.716 [2024-07-26 11:37:26.297158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.716 [2024-07-26 11:37:26.297187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.716 qpair failed and we were unable to recover it. 00:29:30.716 [2024-07-26 11:37:26.297366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.716 [2024-07-26 11:37:26.297395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.716 qpair failed and we were unable to recover it. 00:29:30.716 [2024-07-26 11:37:26.297549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.716 [2024-07-26 11:37:26.297578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.716 qpair failed and we were unable to recover it. 00:29:30.716 [2024-07-26 11:37:26.297767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.716 [2024-07-26 11:37:26.297796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.716 qpair failed and we were unable to recover it. 00:29:30.716 [2024-07-26 11:37:26.298000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.716 [2024-07-26 11:37:26.298028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.716 qpair failed and we were unable to recover it. 00:29:30.716 [2024-07-26 11:37:26.298211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.716 [2024-07-26 11:37:26.298240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.716 qpair failed and we were unable to recover it. 00:29:30.716 [2024-07-26 11:37:26.298420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.716 [2024-07-26 11:37:26.298463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.716 qpair failed and we were unable to recover it. 
00:29:30.716 [2024-07-26 11:37:26.298633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.716 [2024-07-26 11:37:26.298666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.716 qpair failed and we were unable to recover it. 00:29:30.717 [2024-07-26 11:37:26.298854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.717 [2024-07-26 11:37:26.298882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.717 qpair failed and we were unable to recover it. 00:29:30.717 [2024-07-26 11:37:26.299074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.717 [2024-07-26 11:37:26.299102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.717 qpair failed and we were unable to recover it. 00:29:30.717 [2024-07-26 11:37:26.299308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.717 [2024-07-26 11:37:26.299337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.717 qpair failed and we were unable to recover it. 00:29:30.717 [2024-07-26 11:37:26.299563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.717 [2024-07-26 11:37:26.299593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.717 qpair failed and we were unable to recover it. 00:29:30.717 [2024-07-26 11:37:26.299781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.717 [2024-07-26 11:37:26.299809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.717 qpair failed and we were unable to recover it. 00:29:30.717 [2024-07-26 11:37:26.300018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.717 [2024-07-26 11:37:26.300046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.717 qpair failed and we were unable to recover it. 00:29:30.717 [2024-07-26 11:37:26.300225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.717 [2024-07-26 11:37:26.300254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.717 qpair failed and we were unable to recover it. 00:29:30.717 [2024-07-26 11:37:26.300486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.717 [2024-07-26 11:37:26.300515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.717 qpair failed and we were unable to recover it. 00:29:30.717 [2024-07-26 11:37:26.300653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.717 [2024-07-26 11:37:26.300681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.717 qpair failed and we were unable to recover it. 
00:29:30.717 [2024-07-26 11:37:26.300862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.717 [2024-07-26 11:37:26.300926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.717 qpair failed and we were unable to recover it. 00:29:30.717 [2024-07-26 11:37:26.301199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.717 [2024-07-26 11:37:26.301233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.717 qpair failed and we were unable to recover it. 00:29:30.717 [2024-07-26 11:37:26.301477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.717 [2024-07-26 11:37:26.301506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.717 qpair failed and we were unable to recover it. 00:29:30.717 [2024-07-26 11:37:26.301689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.717 [2024-07-26 11:37:26.301718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.717 qpair failed and we were unable to recover it. 00:29:30.717 [2024-07-26 11:37:26.301940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.717 [2024-07-26 11:37:26.302005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.717 qpair failed and we were unable to recover it. 00:29:30.717 [2024-07-26 11:37:26.302303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.717 [2024-07-26 11:37:26.302338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.717 qpair failed and we were unable to recover it. 00:29:30.717 [2024-07-26 11:37:26.302586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.717 [2024-07-26 11:37:26.302616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.717 qpair failed and we were unable to recover it. 00:29:30.717 [2024-07-26 11:37:26.302821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.717 [2024-07-26 11:37:26.302849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.717 qpair failed and we were unable to recover it. 00:29:30.717 [2024-07-26 11:37:26.303116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.717 [2024-07-26 11:37:26.303181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.717 qpair failed and we were unable to recover it. 00:29:30.717 [2024-07-26 11:37:26.303491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.717 [2024-07-26 11:37:26.303520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.717 qpair failed and we were unable to recover it. 
00:29:30.717 [2024-07-26 11:37:26.303707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.717 [2024-07-26 11:37:26.303735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.717 qpair failed and we were unable to recover it. 00:29:30.717 [2024-07-26 11:37:26.303921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.717 [2024-07-26 11:37:26.303950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.717 qpair failed and we were unable to recover it. 00:29:30.717 [2024-07-26 11:37:26.304154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.717 [2024-07-26 11:37:26.304218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.717 qpair failed and we were unable to recover it. 00:29:30.717 [2024-07-26 11:37:26.304503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.717 [2024-07-26 11:37:26.304531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.717 qpair failed and we were unable to recover it. 00:29:30.717 [2024-07-26 11:37:26.304711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.717 [2024-07-26 11:37:26.304740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.717 qpair failed and we were unable to recover it. 00:29:30.717 [2024-07-26 11:37:26.304955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.717 [2024-07-26 11:37:26.304983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.717 qpair failed and we were unable to recover it. 00:29:30.717 [2024-07-26 11:37:26.305240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.717 [2024-07-26 11:37:26.305304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.717 qpair failed and we were unable to recover it. 00:29:30.717 [2024-07-26 11:37:26.305571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.717 [2024-07-26 11:37:26.305601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.717 qpair failed and we were unable to recover it. 00:29:30.717 [2024-07-26 11:37:26.305824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.717 [2024-07-26 11:37:26.305852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.717 qpair failed and we were unable to recover it. 00:29:30.717 [2024-07-26 11:37:26.306144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.717 [2024-07-26 11:37:26.306173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.717 qpair failed and we were unable to recover it. 
00:29:30.717 [2024-07-26 11:37:26.306401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.717 [2024-07-26 11:37:26.306485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.717 qpair failed and we were unable to recover it. 00:29:30.717 [2024-07-26 11:37:26.306677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.717 [2024-07-26 11:37:26.306726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.717 qpair failed and we were unable to recover it. 00:29:30.717 [2024-07-26 11:37:26.307010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.717 [2024-07-26 11:37:26.307039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.717 qpair failed and we were unable to recover it. 00:29:30.717 [2024-07-26 11:37:26.307285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.717 [2024-07-26 11:37:26.307314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.717 qpair failed and we were unable to recover it. 00:29:30.717 [2024-07-26 11:37:26.307544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.717 [2024-07-26 11:37:26.307573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.717 qpair failed and we were unable to recover it. 00:29:30.717 [2024-07-26 11:37:26.307784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.717 [2024-07-26 11:37:26.307819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.717 qpair failed and we were unable to recover it. 00:29:30.717 [2024-07-26 11:37:26.308061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.717 [2024-07-26 11:37:26.308090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.717 qpair failed and we were unable to recover it. 00:29:30.718 [2024-07-26 11:37:26.308297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.718 [2024-07-26 11:37:26.308326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.718 qpair failed and we were unable to recover it. 00:29:30.718 [2024-07-26 11:37:26.308569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.718 [2024-07-26 11:37:26.308598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.718 qpair failed and we were unable to recover it. 00:29:30.718 [2024-07-26 11:37:26.308827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.718 [2024-07-26 11:37:26.308862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.718 qpair failed and we were unable to recover it. 
00:29:30.718 [2024-07-26 11:37:26.309153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.718 [2024-07-26 11:37:26.309187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.718 qpair failed and we were unable to recover it. 00:29:30.718 [2024-07-26 11:37:26.309419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.718 [2024-07-26 11:37:26.309492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.718 qpair failed and we were unable to recover it. 00:29:30.718 [2024-07-26 11:37:26.309653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.718 [2024-07-26 11:37:26.309687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.718 qpair failed and we were unable to recover it. 00:29:30.718 [2024-07-26 11:37:26.309873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.718 [2024-07-26 11:37:26.309908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.718 qpair failed and we were unable to recover it. 00:29:30.718 [2024-07-26 11:37:26.310124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.718 [2024-07-26 11:37:26.310152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.718 qpair failed and we were unable to recover it. 00:29:30.718 [2024-07-26 11:37:26.310367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.718 [2024-07-26 11:37:26.310396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.718 qpair failed and we were unable to recover it. 00:29:30.718 [2024-07-26 11:37:26.310669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.718 [2024-07-26 11:37:26.310698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.718 qpair failed and we were unable to recover it. 00:29:30.718 [2024-07-26 11:37:26.310913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.718 [2024-07-26 11:37:26.310948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.718 qpair failed and we were unable to recover it. 00:29:30.718 [2024-07-26 11:37:26.311141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.718 [2024-07-26 11:37:26.311169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.718 qpair failed and we were unable to recover it. 00:29:30.718 [2024-07-26 11:37:26.311383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.718 [2024-07-26 11:37:26.311411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.718 qpair failed and we were unable to recover it. 
00:29:30.718 [2024-07-26 11:37:26.311676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.718 [2024-07-26 11:37:26.311743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.718 qpair failed and we were unable to recover it. 00:29:30.718 [2024-07-26 11:37:26.312025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.718 [2024-07-26 11:37:26.312059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.718 qpair failed and we were unable to recover it. 00:29:30.718 [2024-07-26 11:37:26.312278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.718 [2024-07-26 11:37:26.312307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.718 qpair failed and we were unable to recover it. 00:29:30.718 [2024-07-26 11:37:26.312507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.718 [2024-07-26 11:37:26.312536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.718 qpair failed and we were unable to recover it. 00:29:30.718 [2024-07-26 11:37:26.312739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.718 [2024-07-26 11:37:26.312803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.718 qpair failed and we were unable to recover it. 00:29:30.718 [2024-07-26 11:37:26.313057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.718 [2024-07-26 11:37:26.313093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.718 qpair failed and we were unable to recover it. 00:29:30.718 [2024-07-26 11:37:26.313312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.718 [2024-07-26 11:37:26.313341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.718 qpair failed and we were unable to recover it. 00:29:30.718 [2024-07-26 11:37:26.313499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.718 [2024-07-26 11:37:26.313528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.718 qpair failed and we were unable to recover it. 00:29:30.718 [2024-07-26 11:37:26.313713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.718 [2024-07-26 11:37:26.313776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.718 qpair failed and we were unable to recover it. 00:29:30.718 [2024-07-26 11:37:26.314058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.718 [2024-07-26 11:37:26.314093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.718 qpair failed and we were unable to recover it. 
00:29:30.718 [2024-07-26 11:37:26.314321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.718 [2024-07-26 11:37:26.314349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.718 qpair failed and we were unable to recover it. 00:29:30.718 [2024-07-26 11:37:26.314566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.718 [2024-07-26 11:37:26.314595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.718 qpair failed and we were unable to recover it. 00:29:30.718 [2024-07-26 11:37:26.314780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.718 [2024-07-26 11:37:26.314844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.718 qpair failed and we were unable to recover it. 00:29:30.718 [2024-07-26 11:37:26.315135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.718 [2024-07-26 11:37:26.315171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.718 qpair failed and we were unable to recover it. 00:29:30.718 [2024-07-26 11:37:26.315400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.718 [2024-07-26 11:37:26.315486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.718 qpair failed and we were unable to recover it. 00:29:30.718 [2024-07-26 11:37:26.315678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.718 [2024-07-26 11:37:26.315706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.718 qpair failed and we were unable to recover it. 00:29:30.718 [2024-07-26 11:37:26.315953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.718 [2024-07-26 11:37:26.316017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.718 qpair failed and we were unable to recover it. 00:29:30.718 [2024-07-26 11:37:26.316307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.718 [2024-07-26 11:37:26.316342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.718 qpair failed and we were unable to recover it. 00:29:30.718 [2024-07-26 11:37:26.316544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.718 [2024-07-26 11:37:26.316573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.718 qpair failed and we were unable to recover it. 00:29:30.718 [2024-07-26 11:37:26.316783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.718 [2024-07-26 11:37:26.316812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:30.718 qpair failed and we were unable to recover it. 
00:29:31.004 [2024-07-26 11:37:26.364893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-07-26 11:37:26.364922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-07-26 11:37:26.365128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-07-26 11:37:26.365156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-07-26 11:37:26.365398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-07-26 11:37:26.365477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-07-26 11:37:26.365765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-07-26 11:37:26.365799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-07-26 11:37:26.366004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-07-26 11:37:26.366032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-07-26 11:37:26.366251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-07-26 11:37:26.366280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-07-26 11:37:26.366572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-07-26 11:37:26.366601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-07-26 11:37:26.366778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-07-26 11:37:26.366828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-07-26 11:37:26.367051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-07-26 11:37:26.367079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-07-26 11:37:26.367261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-07-26 11:37:26.367290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 
00:29:31.004 [2024-07-26 11:37:26.367500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-07-26 11:37:26.367529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-07-26 11:37:26.367742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-07-26 11:37:26.367777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-07-26 11:37:26.368056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-07-26 11:37:26.368085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-07-26 11:37:26.368255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-07-26 11:37:26.368283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-07-26 11:37:26.368494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-07-26 11:37:26.368523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-07-26 11:37:26.368674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-07-26 11:37:26.368722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-07-26 11:37:26.368916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-07-26 11:37:26.368944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-07-26 11:37:26.369115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.004 [2024-07-26 11:37:26.369143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.004 qpair failed and we were unable to recover it. 00:29:31.004 [2024-07-26 11:37:26.369323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.005 [2024-07-26 11:37:26.369386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.005 qpair failed and we were unable to recover it. 00:29:31.005 [2024-07-26 11:37:26.369673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.005 [2024-07-26 11:37:26.369702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.005 qpair failed and we were unable to recover it. 
00:29:31.005 [2024-07-26 11:37:26.369891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.005 [2024-07-26 11:37:26.369924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.005 qpair failed and we were unable to recover it. 00:29:31.005 [2024-07-26 11:37:26.370075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.005 [2024-07-26 11:37:26.370103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.005 qpair failed and we were unable to recover it. 00:29:31.005 [2024-07-26 11:37:26.370306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.005 [2024-07-26 11:37:26.370371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.005 qpair failed and we were unable to recover it. 00:29:31.005 [2024-07-26 11:37:26.370638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.005 [2024-07-26 11:37:26.370667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.005 qpair failed and we were unable to recover it. 00:29:31.005 [2024-07-26 11:37:26.370847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.005 [2024-07-26 11:37:26.370876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.005 qpair failed and we were unable to recover it. 00:29:31.005 [2024-07-26 11:37:26.371125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.005 [2024-07-26 11:37:26.371188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.005 qpair failed and we were unable to recover it. 00:29:31.005 [2024-07-26 11:37:26.371465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.005 [2024-07-26 11:37:26.371520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.005 qpair failed and we were unable to recover it. 00:29:31.005 [2024-07-26 11:37:26.371711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.005 [2024-07-26 11:37:26.371747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.005 qpair failed and we were unable to recover it. 00:29:31.005 [2024-07-26 11:37:26.371992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.005 [2024-07-26 11:37:26.372056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.005 qpair failed and we were unable to recover it. 00:29:31.005 [2024-07-26 11:37:26.372321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.005 [2024-07-26 11:37:26.372385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.005 qpair failed and we were unable to recover it. 
00:29:31.005 [2024-07-26 11:37:26.372686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.005 [2024-07-26 11:37:26.372715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.005 qpair failed and we were unable to recover it. 00:29:31.005 [2024-07-26 11:37:26.373015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.005 [2024-07-26 11:37:26.373050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.005 qpair failed and we were unable to recover it. 00:29:31.005 [2024-07-26 11:37:26.373254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.005 [2024-07-26 11:37:26.373283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.005 qpair failed and we were unable to recover it. 00:29:31.005 [2024-07-26 11:37:26.373475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.005 [2024-07-26 11:37:26.373505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.005 qpair failed and we were unable to recover it. 00:29:31.005 [2024-07-26 11:37:26.373727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.005 [2024-07-26 11:37:26.373791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.005 qpair failed and we were unable to recover it. 00:29:31.005 [2024-07-26 11:37:26.374069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.005 [2024-07-26 11:37:26.374104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.005 qpair failed and we were unable to recover it. 00:29:31.005 [2024-07-26 11:37:26.374288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.005 [2024-07-26 11:37:26.374316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.005 qpair failed and we were unable to recover it. 00:29:31.005 [2024-07-26 11:37:26.374489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.005 [2024-07-26 11:37:26.374519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.005 qpair failed and we were unable to recover it. 00:29:31.005 [2024-07-26 11:37:26.374749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.005 [2024-07-26 11:37:26.374813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.005 qpair failed and we were unable to recover it. 00:29:31.005 [2024-07-26 11:37:26.375096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.005 [2024-07-26 11:37:26.375131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.005 qpair failed and we were unable to recover it. 
00:29:31.005 [2024-07-26 11:37:26.375369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.005 [2024-07-26 11:37:26.375398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.005 qpair failed and we were unable to recover it. 00:29:31.005 [2024-07-26 11:37:26.375611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.005 [2024-07-26 11:37:26.375640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.005 qpair failed and we were unable to recover it. 00:29:31.005 [2024-07-26 11:37:26.375877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.005 [2024-07-26 11:37:26.375942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.005 qpair failed and we were unable to recover it. 00:29:31.005 [2024-07-26 11:37:26.376224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.005 [2024-07-26 11:37:26.376259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.005 qpair failed and we were unable to recover it. 00:29:31.005 [2024-07-26 11:37:26.376451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.005 [2024-07-26 11:37:26.376480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.005 qpair failed and we were unable to recover it. 00:29:31.005 [2024-07-26 11:37:26.376651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.005 [2024-07-26 11:37:26.376679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.005 qpair failed and we were unable to recover it. 00:29:31.005 [2024-07-26 11:37:26.376850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.005 [2024-07-26 11:37:26.376914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.005 qpair failed and we were unable to recover it. 00:29:31.005 [2024-07-26 11:37:26.377168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.005 [2024-07-26 11:37:26.377203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.005 qpair failed and we were unable to recover it. 00:29:31.005 [2024-07-26 11:37:26.377438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.005 [2024-07-26 11:37:26.377467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.005 qpair failed and we were unable to recover it. 00:29:31.006 [2024-07-26 11:37:26.377657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.006 [2024-07-26 11:37:26.377686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420 00:29:31.006 qpair failed and we were unable to recover it. 
00:29:31.006 [2024-07-26 11:37:26.377915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.006 [2024-07-26 11:37:26.377979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:31.006 qpair failed and we were unable to recover it.
00:29:31.006 [2024-07-26 11:37:26.378235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.006 [2024-07-26 11:37:26.378270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:31.006 qpair failed and we were unable to recover it.
00:29:31.006 [2024-07-26 11:37:26.378491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.006 [2024-07-26 11:37:26.378520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:31.006 qpair failed and we were unable to recover it.
00:29:31.006 [2024-07-26 11:37:26.378737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.006 [2024-07-26 11:37:26.378765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:31.006 qpair failed and we were unable to recover it.
00:29:31.006 [2024-07-26 11:37:26.378970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.006 [2024-07-26 11:37:26.378998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:31.006 qpair failed and we were unable to recover it.
00:29:31.006 [2024-07-26 11:37:26.379216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.006 [2024-07-26 11:37:26.379244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cb4000b90 with addr=10.0.0.2, port=4420
00:29:31.006 qpair failed and we were unable to recover it.
00:29:31.006 [2024-07-26 11:37:26.379488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.006 [2024-07-26 11:37:26.379535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420
00:29:31.006 qpair failed and we were unable to recover it.
00:29:31.006 [2024-07-26 11:37:26.379753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.006 [2024-07-26 11:37:26.379784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420
00:29:31.006 qpair failed and we were unable to recover it.
00:29:31.006 [2024-07-26 11:37:26.379971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.006 [2024-07-26 11:37:26.380021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420
00:29:31.006 qpair failed and we were unable to recover it.
00:29:31.006 [2024-07-26 11:37:26.380195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.006 [2024-07-26 11:37:26.380241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420
00:29:31.006 qpair failed and we were unable to recover it.
00:29:31.007 [2024-07-26 11:37:26.394793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.007 [2024-07-26 11:37:26.394846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420
00:29:31.007 qpair failed and we were unable to recover it.
00:29:31.007 [2024-07-26 11:37:26.395035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.007 [2024-07-26 11:37:26.395081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420
00:29:31.007 qpair failed and we were unable to recover it.
00:29:31.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2235626 Killed "${NVMF_APP[@]}" "$@"
00:29:31.007 [2024-07-26 11:37:26.395236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.007 [2024-07-26 11:37:26.395264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420
00:29:31.007 qpair failed and we were unable to recover it.
00:29:31.007 [2024-07-26 11:37:26.395459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.007 [2024-07-26 11:37:26.395487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420
00:29:31.007 qpair failed and we were unable to recover it.
00:29:31.007 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:29:31.008 [2024-07-26 11:37:26.395698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.008 [2024-07-26 11:37:26.395765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420
00:29:31.008 qpair failed and we were unable to recover it.
00:29:31.008 [2024-07-26 11:37:26.395976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.008 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:31.008 [2024-07-26 11:37:26.396022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420
00:29:31.008 qpair failed and we were unable to recover it.
00:29:31.008 [2024-07-26 11:37:26.396203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.008 [2024-07-26 11:37:26.396250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420
00:29:31.008 qpair failed and we were unable to recover it.
00:29:31.008 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:29:31.008 [2024-07-26 11:37:26.396441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.008 [2024-07-26 11:37:26.396469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420
00:29:31.008 qpair failed and we were unable to recover it.
00:29:31.008 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:31.008 [2024-07-26 11:37:26.396641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.008 [2024-07-26 11:37:26.396669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420
00:29:31.008 qpair failed and we were unable to recover it.
00:29:31.008 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:31.008 [2024-07-26 11:37:26.396848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.008 [2024-07-26 11:37:26.396893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420
00:29:31.008 qpair failed and we were unable to recover it.
00:29:31.008 [2024-07-26 11:37:26.397091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.008 [2024-07-26 11:37:26.397140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420
00:29:31.008 qpair failed and we were unable to recover it.
00:29:31.008 [2024-07-26 11:37:26.397319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.008 [2024-07-26 11:37:26.397347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420
00:29:31.008 qpair failed and we were unable to recover it.
00:29:31.008 [2024-07-26 11:37:26.397532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.008 [2024-07-26 11:37:26.397560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420
00:29:31.008 qpair failed and we were unable to recover it.
00:29:31.008 [2024-07-26 11:37:26.397767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.008 [2024-07-26 11:37:26.397814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420
00:29:31.008 qpair failed and we were unable to recover it.
00:29:31.008 [2024-07-26 11:37:26.398036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.008 [2024-07-26 11:37:26.398087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420
00:29:31.008 qpair failed and we were unable to recover it.
00:29:31.008 [2024-07-26 11:37:26.398267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.008 [2024-07-26 11:37:26.398295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420
00:29:31.008 qpair failed and we were unable to recover it.
00:29:31.008 [2024-07-26 11:37:26.398487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.008 [2024-07-26 11:37:26.398522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420
00:29:31.008 qpair failed and we were unable to recover it.
00:29:31.008 [2024-07-26 11:37:26.401155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.008 [2024-07-26 11:37:26.401201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420
00:29:31.008 qpair failed and we were unable to recover it.
00:29:31.008 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2236181
00:29:31.008 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:29:31.008 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2236181
00:29:31.008 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2236181 ']'
00:29:31.008 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:31.008 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:31.008 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:31.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:31.008 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:31.008 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
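For readers unfamiliar with the failure mode above: errno 111 is ECONNREFUSED, i.e. each connect() to 10.0.0.2:4420 is refused while the just-restarted target is not yet listening, so the initiator's qpairs cannot reconnect until the listener comes back up. A minimal sketch of the launch-and-wait pattern the trace shows (the netns name, binary path, flags, and RPC socket are taken from the log above; the polling loop is illustrative and stands in for the autotest waitforlisten helper, not its exact implementation):

    #!/usr/bin/env bash
    # Start the SPDK NVMe-oF target inside the test network namespace.
    # -i 0: shared-memory instance id, -e 0xFFFF: tracepoint group mask, -m 0xF0: core mask.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!

    # Poll the app's JSON-RPC socket until it answers; until the target finishes
    # starting, TCP clients of 10.0.0.2:4420 see ECONNREFUSED (errno 111).
    for _ in $(seq 1 100); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done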
00:29:31.014 [2024-07-26 11:37:26.443590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.014 [2024-07-26 11:37:26.443638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.014 qpair failed and we were unable to recover it. 00:29:31.014 [2024-07-26 11:37:26.443799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.014 [2024-07-26 11:37:26.443826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.014 qpair failed and we were unable to recover it. 00:29:31.014 [2024-07-26 11:37:26.444012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.014 [2024-07-26 11:37:26.444066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.014 qpair failed and we were unable to recover it. 00:29:31.014 [2024-07-26 11:37:26.444198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.014 [2024-07-26 11:37:26.444226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.014 qpair failed and we were unable to recover it. 00:29:31.014 [2024-07-26 11:37:26.444416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.014 [2024-07-26 11:37:26.444453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.014 qpair failed and we were unable to recover it. 00:29:31.014 [2024-07-26 11:37:26.444620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.014 [2024-07-26 11:37:26.444666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.014 qpair failed and we were unable to recover it. 00:29:31.014 [2024-07-26 11:37:26.444840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.014 [2024-07-26 11:37:26.444891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.014 qpair failed and we were unable to recover it. 00:29:31.014 [2024-07-26 11:37:26.445038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.014 [2024-07-26 11:37:26.445085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.014 qpair failed and we were unable to recover it. 00:29:31.014 [2024-07-26 11:37:26.445209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.014 [2024-07-26 11:37:26.445236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.014 qpair failed and we were unable to recover it. 00:29:31.014 [2024-07-26 11:37:26.445398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.014 [2024-07-26 11:37:26.445426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.014 qpair failed and we were unable to recover it. 
00:29:31.014 [2024-07-26 11:37:26.445644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.014 [2024-07-26 11:37:26.445697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.014 qpair failed and we were unable to recover it. 00:29:31.014 [2024-07-26 11:37:26.445873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.014 [2024-07-26 11:37:26.445920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.014 qpair failed and we were unable to recover it. 00:29:31.014 [2024-07-26 11:37:26.446086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.014 [2024-07-26 11:37:26.446138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.014 qpair failed and we were unable to recover it. 00:29:31.014 [2024-07-26 11:37:26.446305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.014 [2024-07-26 11:37:26.446333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.014 qpair failed and we were unable to recover it. 00:29:31.014 [2024-07-26 11:37:26.446494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.014 [2024-07-26 11:37:26.446544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.014 qpair failed and we were unable to recover it. 00:29:31.014 [2024-07-26 11:37:26.446714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.014 [2024-07-26 11:37:26.446759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.014 qpair failed and we were unable to recover it. 00:29:31.014 [2024-07-26 11:37:26.446972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.014 [2024-07-26 11:37:26.447025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.014 qpair failed and we were unable to recover it. 00:29:31.014 [2024-07-26 11:37:26.447221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.014 [2024-07-26 11:37:26.447267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.014 qpair failed and we were unable to recover it. 00:29:31.014 [2024-07-26 11:37:26.447404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.014 [2024-07-26 11:37:26.447444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.014 qpair failed and we were unable to recover it. 00:29:31.014 [2024-07-26 11:37:26.447649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.014 [2024-07-26 11:37:26.447697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.014 qpair failed and we were unable to recover it. 
00:29:31.014 [2024-07-26 11:37:26.447885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.014 [2024-07-26 11:37:26.447936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.014 qpair failed and we were unable to recover it. 00:29:31.014 [2024-07-26 11:37:26.448103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.014 [2024-07-26 11:37:26.448146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.014 qpair failed and we were unable to recover it. 00:29:31.014 [2024-07-26 11:37:26.448310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.014 [2024-07-26 11:37:26.448338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.014 qpair failed and we were unable to recover it. 00:29:31.014 [2024-07-26 11:37:26.448536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.014 [2024-07-26 11:37:26.448583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.014 qpair failed and we were unable to recover it. 00:29:31.014 [2024-07-26 11:37:26.448766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.014 [2024-07-26 11:37:26.448819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.014 qpair failed and we were unable to recover it. 00:29:31.014 [2024-07-26 11:37:26.448996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.014 [2024-07-26 11:37:26.449042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.014 qpair failed and we were unable to recover it. 00:29:31.014 [2024-07-26 11:37:26.449221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.014 [2024-07-26 11:37:26.449249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.014 qpair failed and we were unable to recover it. 00:29:31.014 [2024-07-26 11:37:26.449446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.014 [2024-07-26 11:37:26.449475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.014 qpair failed and we were unable to recover it. 00:29:31.014 [2024-07-26 11:37:26.449608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.014 [2024-07-26 11:37:26.449655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.014 qpair failed and we were unable to recover it. 00:29:31.014 [2024-07-26 11:37:26.449807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.014 [2024-07-26 11:37:26.449855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.014 qpair failed and we were unable to recover it. 
00:29:31.014 [2024-07-26 11:37:26.450036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.014 [2024-07-26 11:37:26.450088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.014 qpair failed and we were unable to recover it. 00:29:31.015 [2024-07-26 11:37:26.450266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.015 [2024-07-26 11:37:26.450294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.015 qpair failed and we were unable to recover it. 00:29:31.015 [2024-07-26 11:37:26.450483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.015 [2024-07-26 11:37:26.450518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.015 qpair failed and we were unable to recover it. 00:29:31.015 [2024-07-26 11:37:26.450703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.015 [2024-07-26 11:37:26.450747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.015 qpair failed and we were unable to recover it. 00:29:31.015 [2024-07-26 11:37:26.450947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.015 [2024-07-26 11:37:26.450999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.015 qpair failed and we were unable to recover it. 00:29:31.015 [2024-07-26 11:37:26.451141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.015 [2024-07-26 11:37:26.451188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.015 qpair failed and we were unable to recover it. 00:29:31.015 [2024-07-26 11:37:26.451375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.015 [2024-07-26 11:37:26.451403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.015 qpair failed and we were unable to recover it. 00:29:31.015 [2024-07-26 11:37:26.451588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.015 [2024-07-26 11:37:26.451636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.015 qpair failed and we were unable to recover it. 00:29:31.015 [2024-07-26 11:37:26.451821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.015 [2024-07-26 11:37:26.451871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.015 qpair failed and we were unable to recover it. 00:29:31.015 [2024-07-26 11:37:26.452075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.015 [2024-07-26 11:37:26.452121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.015 qpair failed and we were unable to recover it. 
00:29:31.015 [2024-07-26 11:37:26.452279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.015 [2024-07-26 11:37:26.452307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.015 qpair failed and we were unable to recover it. 00:29:31.015 [2024-07-26 11:37:26.452488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.015 [2024-07-26 11:37:26.452537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.015 qpair failed and we were unable to recover it. 00:29:31.015 [2024-07-26 11:37:26.452723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.015 [2024-07-26 11:37:26.452779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.015 qpair failed and we were unable to recover it. 00:29:31.015 [2024-07-26 11:37:26.452952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.015 [2024-07-26 11:37:26.452998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.015 qpair failed and we were unable to recover it. 00:29:31.015 [2024-07-26 11:37:26.453165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.015 [2024-07-26 11:37:26.453193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.015 qpair failed and we were unable to recover it. 00:29:31.015 [2024-07-26 11:37:26.453332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.015 [2024-07-26 11:37:26.453360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.015 qpair failed and we were unable to recover it. 00:29:31.015 [2024-07-26 11:37:26.453561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.015 [2024-07-26 11:37:26.453607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.015 qpair failed and we were unable to recover it. 00:29:31.015 [2024-07-26 11:37:26.453787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.015 [2024-07-26 11:37:26.453833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.015 qpair failed and we were unable to recover it. 00:29:31.015 [2024-07-26 11:37:26.454017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.015 [2024-07-26 11:37:26.454065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.015 qpair failed and we were unable to recover it. 00:29:31.015 [2024-07-26 11:37:26.454228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.015 [2024-07-26 11:37:26.454255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.015 qpair failed and we were unable to recover it. 
00:29:31.015 [2024-07-26 11:37:26.455616] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization...
00:29:31.015 [2024-07-26 11:37:26.455725] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[... after the reinitialization the connect() retries to 10.0.0.2:4420 keep failing with errno = 111; the same error triple repeats through 2024-07-26 11:37:26.481 ...]
00:29:31.018 [2024-07-26 11:37:26.481075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.018 [2024-07-26 11:37:26.481126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420
00:29:31.018 qpair failed and we were unable to recover it.
00:29:31.018 [2024-07-26 11:37:26.481272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.018 [2024-07-26 11:37:26.481300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.018 qpair failed and we were unable to recover it. 00:29:31.018 [2024-07-26 11:37:26.481545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.018 [2024-07-26 11:37:26.481592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.018 qpair failed and we were unable to recover it. 00:29:31.018 [2024-07-26 11:37:26.481785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.018 [2024-07-26 11:37:26.481832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.018 qpair failed and we were unable to recover it. 00:29:31.018 [2024-07-26 11:37:26.482048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.018 [2024-07-26 11:37:26.482106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.018 qpair failed and we were unable to recover it. 00:29:31.018 [2024-07-26 11:37:26.482247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.018 [2024-07-26 11:37:26.482274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.018 qpair failed and we were unable to recover it. 00:29:31.018 [2024-07-26 11:37:26.482464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.018 [2024-07-26 11:37:26.482519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.018 qpair failed and we were unable to recover it. 00:29:31.018 [2024-07-26 11:37:26.482679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.018 [2024-07-26 11:37:26.482726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.018 qpair failed and we were unable to recover it. 00:29:31.018 [2024-07-26 11:37:26.482904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.018 [2024-07-26 11:37:26.482954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.019 qpair failed and we were unable to recover it. 00:29:31.019 [2024-07-26 11:37:26.483135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.019 [2024-07-26 11:37:26.483182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.019 qpair failed and we were unable to recover it. 00:29:31.019 [2024-07-26 11:37:26.483351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.019 [2024-07-26 11:37:26.483378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.019 qpair failed and we were unable to recover it. 
00:29:31.019 [2024-07-26 11:37:26.483554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.019 [2024-07-26 11:37:26.483589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.019 qpair failed and we were unable to recover it. 00:29:31.019 [2024-07-26 11:37:26.483787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.019 [2024-07-26 11:37:26.483844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.019 qpair failed and we were unable to recover it. 00:29:31.019 [2024-07-26 11:37:26.484045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.019 [2024-07-26 11:37:26.484092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.019 qpair failed and we were unable to recover it. 00:29:31.019 [2024-07-26 11:37:26.484285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.019 [2024-07-26 11:37:26.484312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.019 qpair failed and we were unable to recover it. 00:29:31.019 [2024-07-26 11:37:26.484504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.019 [2024-07-26 11:37:26.484552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.019 qpair failed and we were unable to recover it. 00:29:31.019 [2024-07-26 11:37:26.484726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.019 [2024-07-26 11:37:26.484774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.019 qpair failed and we were unable to recover it. 00:29:31.019 [2024-07-26 11:37:26.484945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.019 [2024-07-26 11:37:26.484990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.019 qpair failed and we were unable to recover it. 00:29:31.019 [2024-07-26 11:37:26.485181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.019 [2024-07-26 11:37:26.485235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.019 qpair failed and we were unable to recover it. 00:29:31.019 [2024-07-26 11:37:26.485426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.019 [2024-07-26 11:37:26.485462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.019 qpair failed and we were unable to recover it. 00:29:31.019 [2024-07-26 11:37:26.485627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.019 [2024-07-26 11:37:26.485675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.019 qpair failed and we were unable to recover it. 
00:29:31.019 [2024-07-26 11:37:26.485839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.019 [2024-07-26 11:37:26.485885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.019 qpair failed and we were unable to recover it. 00:29:31.019 [2024-07-26 11:37:26.486092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.019 [2024-07-26 11:37:26.486144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.019 qpair failed and we were unable to recover it. 00:29:31.019 [2024-07-26 11:37:26.486285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.019 [2024-07-26 11:37:26.486313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.019 qpair failed and we were unable to recover it. 00:29:31.019 [2024-07-26 11:37:26.486558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.019 [2024-07-26 11:37:26.486604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.019 qpair failed and we were unable to recover it. 00:29:31.019 [2024-07-26 11:37:26.486771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.019 [2024-07-26 11:37:26.486818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.019 qpair failed and we were unable to recover it. 00:29:31.019 [2024-07-26 11:37:26.486991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.019 [2024-07-26 11:37:26.487044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.019 qpair failed and we were unable to recover it. 00:29:31.019 [2024-07-26 11:37:26.487233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.019 [2024-07-26 11:37:26.487261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.019 qpair failed and we were unable to recover it. 00:29:31.019 [2024-07-26 11:37:26.487424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.019 [2024-07-26 11:37:26.487460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.019 qpair failed and we were unable to recover it. 00:29:31.019 [2024-07-26 11:37:26.487677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.019 [2024-07-26 11:37:26.487726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.019 qpair failed and we were unable to recover it. 00:29:31.019 [2024-07-26 11:37:26.487940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.019 [2024-07-26 11:37:26.487998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.019 qpair failed and we were unable to recover it. 
00:29:31.019 [2024-07-26 11:37:26.488149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.019 [2024-07-26 11:37:26.488195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.019 qpair failed and we were unable to recover it. 00:29:31.019 [2024-07-26 11:37:26.488382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.019 [2024-07-26 11:37:26.488411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.019 qpair failed and we were unable to recover it. 00:29:31.019 [2024-07-26 11:37:26.488585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.019 [2024-07-26 11:37:26.488633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.019 qpair failed and we were unable to recover it. 00:29:31.019 [2024-07-26 11:37:26.488811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.019 [2024-07-26 11:37:26.488875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.019 qpair failed and we were unable to recover it. 00:29:31.019 [2024-07-26 11:37:26.489072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.019 [2024-07-26 11:37:26.489118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.019 qpair failed and we were unable to recover it. 00:29:31.019 [2024-07-26 11:37:26.489301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.019 [2024-07-26 11:37:26.489329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.019 qpair failed and we were unable to recover it. 00:29:31.019 [2024-07-26 11:37:26.489499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.019 [2024-07-26 11:37:26.489546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.019 qpair failed and we were unable to recover it. 00:29:31.019 [2024-07-26 11:37:26.489777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.019 [2024-07-26 11:37:26.489830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.019 qpair failed and we were unable to recover it. 00:29:31.019 [2024-07-26 11:37:26.489981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.019 [2024-07-26 11:37:26.490028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.019 qpair failed and we were unable to recover it. 00:29:31.020 [2024-07-26 11:37:26.490216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.020 [2024-07-26 11:37:26.490269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.020 qpair failed and we were unable to recover it. 
00:29:31.020 [2024-07-26 11:37:26.490464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.020 [2024-07-26 11:37:26.490493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.020 qpair failed and we were unable to recover it. 00:29:31.020 [2024-07-26 11:37:26.490673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.020 [2024-07-26 11:37:26.490719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.020 qpair failed and we were unable to recover it. 00:29:31.020 [2024-07-26 11:37:26.490851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.020 [2024-07-26 11:37:26.490898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.020 qpair failed and we were unable to recover it. 00:29:31.020 [2024-07-26 11:37:26.491112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.020 [2024-07-26 11:37:26.491169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.020 qpair failed and we were unable to recover it. 00:29:31.020 [2024-07-26 11:37:26.491336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.020 [2024-07-26 11:37:26.491368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.020 qpair failed and we were unable to recover it. 00:29:31.020 [2024-07-26 11:37:26.491574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.020 [2024-07-26 11:37:26.491621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.020 qpair failed and we were unable to recover it. 00:29:31.020 [2024-07-26 11:37:26.491829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.020 [2024-07-26 11:37:26.491874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.020 qpair failed and we were unable to recover it. 00:29:31.020 [2024-07-26 11:37:26.492044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.020 [2024-07-26 11:37:26.492095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.020 qpair failed and we were unable to recover it. 00:29:31.020 [2024-07-26 11:37:26.492254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.020 [2024-07-26 11:37:26.492282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.020 qpair failed and we were unable to recover it. 00:29:31.020 [2024-07-26 11:37:26.492504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.020 [2024-07-26 11:37:26.492539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.020 qpair failed and we were unable to recover it. 
00:29:31.020 [2024-07-26 11:37:26.492762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.020 [2024-07-26 11:37:26.492809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.020 qpair failed and we were unable to recover it. 00:29:31.020 [2024-07-26 11:37:26.492996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.020 [2024-07-26 11:37:26.493046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.020 qpair failed and we were unable to recover it. 00:29:31.020 [2024-07-26 11:37:26.493208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.020 [2024-07-26 11:37:26.493236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.020 qpair failed and we were unable to recover it. 00:29:31.020 [2024-07-26 11:37:26.493399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.020 [2024-07-26 11:37:26.493433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.020 qpair failed and we were unable to recover it. 00:29:31.020 [2024-07-26 11:37:26.493590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.020 [2024-07-26 11:37:26.493636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.020 qpair failed and we were unable to recover it. 00:29:31.020 [2024-07-26 11:37:26.493812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.020 [2024-07-26 11:37:26.493862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.020 qpair failed and we were unable to recover it. 00:29:31.020 [2024-07-26 11:37:26.494035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.020 [2024-07-26 11:37:26.494081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.020 qpair failed and we were unable to recover it. 00:29:31.020 [2024-07-26 11:37:26.494241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.020 [2024-07-26 11:37:26.494269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.020 qpair failed and we were unable to recover it. 00:29:31.020 [2024-07-26 11:37:26.494477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.020 [2024-07-26 11:37:26.494525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.020 qpair failed and we were unable to recover it. 00:29:31.020 [2024-07-26 11:37:26.494726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.020 [2024-07-26 11:37:26.494779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.020 qpair failed and we were unable to recover it. 
00:29:31.020 [2024-07-26 11:37:26.494939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.020 [2024-07-26 11:37:26.494983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.020 qpair failed and we were unable to recover it. 00:29:31.020 [2024-07-26 11:37:26.495108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.020 [2024-07-26 11:37:26.495136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.020 qpair failed and we were unable to recover it. 00:29:31.020 [2024-07-26 11:37:26.495321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.020 [2024-07-26 11:37:26.495349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.020 qpair failed and we were unable to recover it. 00:29:31.020 [2024-07-26 11:37:26.495493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.020 [2024-07-26 11:37:26.495543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.020 qpair failed and we were unable to recover it. 00:29:31.020 [2024-07-26 11:37:26.495691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.020 [2024-07-26 11:37:26.495737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.020 qpair failed and we were unable to recover it. 00:29:31.020 [2024-07-26 11:37:26.495914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.020 [2024-07-26 11:37:26.495964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.020 qpair failed and we were unable to recover it. 00:29:31.020 [2024-07-26 11:37:26.496129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.020 [2024-07-26 11:37:26.496157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.020 qpair failed and we were unable to recover it. 00:29:31.020 [2024-07-26 11:37:26.496293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.020 [2024-07-26 11:37:26.496321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.020 qpair failed and we were unable to recover it. 00:29:31.020 [2024-07-26 11:37:26.496472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.020 [2024-07-26 11:37:26.496506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.020 qpair failed and we were unable to recover it. 00:29:31.020 [2024-07-26 11:37:26.496730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.020 [2024-07-26 11:37:26.496780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.020 qpair failed and we were unable to recover it. 
00:29:31.020 [2024-07-26 11:37:26.496940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.020 [2024-07-26 11:37:26.496985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.020 qpair failed and we were unable to recover it. 00:29:31.020 [2024-07-26 11:37:26.497174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.020 [2024-07-26 11:37:26.497206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.020 qpair failed and we were unable to recover it. 00:29:31.020 [2024-07-26 11:37:26.497410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.021 [2024-07-26 11:37:26.497454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.021 qpair failed and we were unable to recover it. 00:29:31.021 [2024-07-26 11:37:26.497623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.021 [2024-07-26 11:37:26.497669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.021 qpair failed and we were unable to recover it. 00:29:31.021 [2024-07-26 11:37:26.497867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.021 [2024-07-26 11:37:26.497914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.021 qpair failed and we were unable to recover it. 00:29:31.021 [2024-07-26 11:37:26.498061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.021 [2024-07-26 11:37:26.498111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.021 qpair failed and we were unable to recover it. 00:29:31.021 [2024-07-26 11:37:26.498251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.021 [2024-07-26 11:37:26.498278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.021 qpair failed and we were unable to recover it. 00:29:31.021 [2024-07-26 11:37:26.498477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.021 [2024-07-26 11:37:26.498524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.021 qpair failed and we were unable to recover it. 00:29:31.021 [2024-07-26 11:37:26.498692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.021 [2024-07-26 11:37:26.498738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.021 qpair failed and we were unable to recover it. 00:29:31.021 [2024-07-26 11:37:26.498914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.021 [2024-07-26 11:37:26.498966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.021 qpair failed and we were unable to recover it. 
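Editor's note: errno = 111 on Linux is ECONNREFUSED, which typically means the target host answered the TCP SYN with an RST: nothing was accepting connections on 10.0.0.2:4420 (the standard NVMe/TCP port) during this window, so every qpair reconnect attempt failed immediately. The minimal standalone sketch below is illustrative only, not SPDK code; the address and port are copied from the log lines above.

    /* econnrefused.c - minimal repro of "connect() failed, errno = 111".
     * Illustrative sketch only, not part of SPDK. Build: cc econnrefused.c */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener on 10.0.0.2:4420 this prints errno 111 (ECONNREFUSED). */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }

Run against a host with no listener on that port, this prints "connect() failed, errno = 111 (Connection refused)", matching the posix.c message in the log.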
00:29:31.021 [2024-07-26 11:37:26.499102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.021 [2024-07-26 11:37:26.499129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420
00:29:31.021 qpair failed and we were unable to recover it.
00:29:31.021 .. 00:29:31.023 [the same pattern repeats from 11:37:26.499267 through 11:37:26.520083 (roughly 90 attempts), all with errno = 111 for tqpair=0x5c1ea0, addr=10.0.0.2, port=4420]
00:29:31.021 EAL: No free 2048 kB hugepages reported on node 1 [single interleaved message, logged after the attempt at 11:37:26.500839]
00:29:31.023 [2024-07-26 11:37:26.520288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.023 [2024-07-26 11:37:26.520316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.023 qpair failed and we were unable to recover it. 00:29:31.023 [2024-07-26 11:37:26.520549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.023 [2024-07-26 11:37:26.520584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.023 qpair failed and we were unable to recover it. 00:29:31.023 [2024-07-26 11:37:26.520843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.023 [2024-07-26 11:37:26.520891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.023 qpair failed and we were unable to recover it. 00:29:31.023 [2024-07-26 11:37:26.521081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.023 [2024-07-26 11:37:26.521129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.023 qpair failed and we were unable to recover it. 00:29:31.023 [2024-07-26 11:37:26.521343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.023 [2024-07-26 11:37:26.521379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.023 qpair failed and we were unable to recover it. 00:29:31.023 [2024-07-26 11:37:26.521598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.023 [2024-07-26 11:37:26.521644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.023 qpair failed and we were unable to recover it. 00:29:31.023 [2024-07-26 11:37:26.521868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.023 [2024-07-26 11:37:26.521913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.023 qpair failed and we were unable to recover it. 00:29:31.023 [2024-07-26 11:37:26.522117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.023 [2024-07-26 11:37:26.522169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.023 qpair failed and we were unable to recover it. 00:29:31.023 [2024-07-26 11:37:26.522372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.023 [2024-07-26 11:37:26.522400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.023 qpair failed and we were unable to recover it. 00:29:31.023 [2024-07-26 11:37:26.522567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.023 [2024-07-26 11:37:26.522615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c1ea0 with addr=10.0.0.2, port=4420 00:29:31.023 qpair failed and we were unable to recover it. 
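For context on the flood above: errno = 111 on Linux is ECONNREFUSED, i.e. the host at 10.0.0.2 answered the TCP SYN for port 4420 with a reset because nothing was listening there yet. A minimal standalone sketch of how that errno value arises from a plain POSIX connect() follows; it is illustrative only, not SPDK's posix_sock_create(), and the address and port are simply copied from the log:

    /* connect_refused.c - hypothetical demo, not part of the test suite.
     * Attempts one TCP connect to 10.0.0.2:4420 and prints the errno,
     * which is 111 (ECONNREFUSED) when no listener is present. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in sa = {0};
        sa.sin_family = AF_INET;
        sa.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

        close(fd);
        return 0;
    }

Built with plain `cc connect_refused.c`, this prints "connect() failed, errno = 111 (Connection refused)" for as long as the listener is down, matching the posix.c error above.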
00:29:31.023 [2024-07-26 11:37:26.522880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.023 [2024-07-26 11:37:26.522937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420
00:29:31.023 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / qpair failed sequence, now for tqpair=0x7f0cbc000b90, repeats for every attempt between 11:37:26.523192 and 11:37:26.548343 ...]
00:29:31.026 [2024-07-26 11:37:26.548551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.026 [2024-07-26 11:37:26.548579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420
00:29:31.026 qpair failed and we were unable to recover it.
00:29:31.026 [2024-07-26 11:37:26.548719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.026 [2024-07-26 11:37:26.548747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420
00:29:31.026 qpair failed and we were unable to recover it.
00:29:31.026 [2024-07-26 11:37:26.548800] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
[... the same connect() failed (errno = 111) / sock connection error / qpair failed sequence for tqpair=0x7f0cbc000b90 repeats for every attempt between 11:37:26.548969 and 11:37:26.550718 ...]
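The interleaved spdk_app_start NOTICE suggests an SPDK application in this test is only now coming up (on 4 cores) while the other side keeps retrying the connection; once a listener is ready on 10.0.0.2:4420, the connect() attempts can start succeeding. A hedged sketch of the retry pattern the log exhibits; this illustrates the observable behavior only, not SPDK's actual reconnect logic:

    /* retry_connect.c - hypothetical demo of the pattern in the log:
     * keep attempting a TCP connect until the listener appears. Each
     * failed pass corresponds to one "connect() failed ... qpair failed"
     * triplet above. */
    #include <stdio.h>
    #include <errno.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        for (;;) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0) {
                perror("socket");
                return 1;
            }

            struct sockaddr_in sa = {0};
            sa.sin_family = AF_INET;
            sa.sin_port = htons(4420);
            inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

            if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0) {
                printf("connected\n");
                close(fd);
                return 0;
            }

            fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                    errno, strerror(errno));
            close(fd);
            usleep(200 * 1000);    /* back off 200 ms, then retry */
        }
    }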
00:29:31.027 [2024-07-26 11:37:26.550880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.027 [2024-07-26 11:37:26.550937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420
00:29:31.027 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / qpair failed sequence for tqpair=0x7f0cbc000b90 repeats for every attempt between 11:37:26.551169 and 11:37:26.563089 ...]
00:29:31.028 [2024-07-26 11:37:26.563313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.028 [2024-07-26 11:37:26.563341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420
00:29:31.028 qpair failed and we were unable to recover it.
00:29:31.028 [2024-07-26 11:37:26.563545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.028 [2024-07-26 11:37:26.563573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.028 qpair failed and we were unable to recover it. 00:29:31.028 [2024-07-26 11:37:26.563768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.028 [2024-07-26 11:37:26.563797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.028 qpair failed and we were unable to recover it. 00:29:31.028 [2024-07-26 11:37:26.563975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.028 [2024-07-26 11:37:26.564032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.028 qpair failed and we were unable to recover it. 00:29:31.028 [2024-07-26 11:37:26.564198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.028 [2024-07-26 11:37:26.564226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.028 qpair failed and we were unable to recover it. 00:29:31.028 [2024-07-26 11:37:26.564415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.028 [2024-07-26 11:37:26.564456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.028 qpair failed and we were unable to recover it. 00:29:31.028 [2024-07-26 11:37:26.564688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.028 [2024-07-26 11:37:26.564716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.028 qpair failed and we were unable to recover it. 00:29:31.028 [2024-07-26 11:37:26.564945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.028 [2024-07-26 11:37:26.565004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.028 qpair failed and we were unable to recover it. 00:29:31.028 [2024-07-26 11:37:26.565223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.028 [2024-07-26 11:37:26.565251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.028 qpair failed and we were unable to recover it. 00:29:31.028 [2024-07-26 11:37:26.565488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.028 [2024-07-26 11:37:26.565517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.028 qpair failed and we were unable to recover it. 00:29:31.028 [2024-07-26 11:37:26.565671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.028 [2024-07-26 11:37:26.565698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.028 qpair failed and we were unable to recover it. 
00:29:31.028 [2024-07-26 11:37:26.565929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.028 [2024-07-26 11:37:26.565983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.028 qpair failed and we were unable to recover it. 00:29:31.028 [2024-07-26 11:37:26.566184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.028 [2024-07-26 11:37:26.566212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.028 qpair failed and we were unable to recover it. 00:29:31.028 [2024-07-26 11:37:26.566412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.028 [2024-07-26 11:37:26.566471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.028 qpair failed and we were unable to recover it. 00:29:31.028 [2024-07-26 11:37:26.566682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.028 [2024-07-26 11:37:26.566710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.028 qpair failed and we were unable to recover it. 00:29:31.028 [2024-07-26 11:37:26.566916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.028 [2024-07-26 11:37:26.566977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.028 qpair failed and we were unable to recover it. 00:29:31.028 [2024-07-26 11:37:26.567182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.028 [2024-07-26 11:37:26.567210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.028 qpair failed and we were unable to recover it. 00:29:31.028 [2024-07-26 11:37:26.567389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.028 [2024-07-26 11:37:26.567422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.028 qpair failed and we were unable to recover it. 00:29:31.028 [2024-07-26 11:37:26.567594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.028 [2024-07-26 11:37:26.567621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.028 qpair failed and we were unable to recover it. 00:29:31.028 [2024-07-26 11:37:26.567832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.029 [2024-07-26 11:37:26.567889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.029 qpair failed and we were unable to recover it. 00:29:31.029 [2024-07-26 11:37:26.568107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.029 [2024-07-26 11:37:26.568134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.029 qpair failed and we were unable to recover it. 
00:29:31.029 [2024-07-26 11:37:26.568324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.029 [2024-07-26 11:37:26.568358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.029 qpair failed and we were unable to recover it. 00:29:31.029 [2024-07-26 11:37:26.568593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.029 [2024-07-26 11:37:26.568621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.029 qpair failed and we were unable to recover it. 00:29:31.029 [2024-07-26 11:37:26.568828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.029 [2024-07-26 11:37:26.568884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.029 qpair failed and we were unable to recover it. 00:29:31.029 [2024-07-26 11:37:26.569118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.029 [2024-07-26 11:37:26.569145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.029 qpair failed and we were unable to recover it. 00:29:31.029 [2024-07-26 11:37:26.569353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.029 [2024-07-26 11:37:26.569388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.029 qpair failed and we were unable to recover it. 00:29:31.029 [2024-07-26 11:37:26.569562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.029 [2024-07-26 11:37:26.569590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.029 qpair failed and we were unable to recover it. 00:29:31.029 [2024-07-26 11:37:26.569779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.029 [2024-07-26 11:37:26.569834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.029 qpair failed and we were unable to recover it. 00:29:31.029 [2024-07-26 11:37:26.570029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.029 [2024-07-26 11:37:26.570057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.029 qpair failed and we were unable to recover it. 00:29:31.029 [2024-07-26 11:37:26.570287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.029 [2024-07-26 11:37:26.570341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.029 qpair failed and we were unable to recover it. 00:29:31.029 [2024-07-26 11:37:26.570538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.029 [2024-07-26 11:37:26.570567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.029 qpair failed and we were unable to recover it. 
00:29:31.029 [2024-07-26 11:37:26.570749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.029 [2024-07-26 11:37:26.570808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.029 qpair failed and we were unable to recover it. 00:29:31.029 [2024-07-26 11:37:26.571027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.029 [2024-07-26 11:37:26.571055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.029 qpair failed and we were unable to recover it. 00:29:31.029 [2024-07-26 11:37:26.571256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.029 [2024-07-26 11:37:26.571290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.029 qpair failed and we were unable to recover it. 00:29:31.029 [2024-07-26 11:37:26.571464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.029 [2024-07-26 11:37:26.571492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.029 qpair failed and we were unable to recover it. 00:29:31.029 [2024-07-26 11:37:26.571676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.029 [2024-07-26 11:37:26.571721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.029 qpair failed and we were unable to recover it. 00:29:31.029 [2024-07-26 11:37:26.571889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.029 [2024-07-26 11:37:26.571916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.029 qpair failed and we were unable to recover it. 00:29:31.029 [2024-07-26 11:37:26.572101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.029 [2024-07-26 11:37:26.572154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.029 qpair failed and we were unable to recover it. 00:29:31.029 [2024-07-26 11:37:26.572359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.029 [2024-07-26 11:37:26.572387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.029 qpair failed and we were unable to recover it. 00:29:31.029 [2024-07-26 11:37:26.572541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.029 [2024-07-26 11:37:26.572569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.029 qpair failed and we were unable to recover it. 00:29:31.029 [2024-07-26 11:37:26.572748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.029 [2024-07-26 11:37:26.572775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.029 qpair failed and we were unable to recover it. 
00:29:31.029 [2024-07-26 11:37:26.572986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.029 [2024-07-26 11:37:26.573042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.029 qpair failed and we were unable to recover it. 00:29:31.029 [2024-07-26 11:37:26.573255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.029 [2024-07-26 11:37:26.573282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.029 qpair failed and we were unable to recover it. 00:29:31.029 [2024-07-26 11:37:26.573521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.029 [2024-07-26 11:37:26.573549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.029 qpair failed and we were unable to recover it. 00:29:31.029 [2024-07-26 11:37:26.573758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.029 [2024-07-26 11:37:26.573785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.029 qpair failed and we were unable to recover it. 00:29:31.029 [2024-07-26 11:37:26.573999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.029 [2024-07-26 11:37:26.574052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.029 qpair failed and we were unable to recover it. 00:29:31.029 [2024-07-26 11:37:26.574259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.029 [2024-07-26 11:37:26.574286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.029 qpair failed and we were unable to recover it. 00:29:31.029 [2024-07-26 11:37:26.574517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.029 [2024-07-26 11:37:26.574544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.029 qpair failed and we were unable to recover it. 00:29:31.029 [2024-07-26 11:37:26.574731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.029 [2024-07-26 11:37:26.574758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.029 qpair failed and we were unable to recover it. 00:29:31.029 [2024-07-26 11:37:26.575000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.029 [2024-07-26 11:37:26.575054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.029 qpair failed and we were unable to recover it. 00:29:31.029 [2024-07-26 11:37:26.575245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.030 [2024-07-26 11:37:26.575271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.030 qpair failed and we were unable to recover it. 
00:29:31.030 [2024-07-26 11:37:26.575414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.030 [2024-07-26 11:37:26.575454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.030 qpair failed and we were unable to recover it. 00:29:31.030 [2024-07-26 11:37:26.575681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.030 [2024-07-26 11:37:26.575709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.030 qpair failed and we were unable to recover it. 00:29:31.030 [2024-07-26 11:37:26.575926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.030 [2024-07-26 11:37:26.575982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.030 qpair failed and we were unable to recover it. 00:29:31.030 [2024-07-26 11:37:26.576204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.030 [2024-07-26 11:37:26.576232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.030 qpair failed and we were unable to recover it. 00:29:31.030 [2024-07-26 11:37:26.576454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.030 [2024-07-26 11:37:26.576507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.030 qpair failed and we were unable to recover it. 00:29:31.030 [2024-07-26 11:37:26.576690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.030 [2024-07-26 11:37:26.576718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.030 qpair failed and we were unable to recover it. 00:29:31.030 [2024-07-26 11:37:26.576907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.030 [2024-07-26 11:37:26.576961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.030 qpair failed and we were unable to recover it. 00:29:31.030 [2024-07-26 11:37:26.577193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.030 [2024-07-26 11:37:26.577220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.030 qpair failed and we were unable to recover it. 00:29:31.030 [2024-07-26 11:37:26.577445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.030 [2024-07-26 11:37:26.577494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.030 qpair failed and we were unable to recover it. 00:29:31.030 [2024-07-26 11:37:26.577681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.030 [2024-07-26 11:37:26.577708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.030 qpair failed and we were unable to recover it. 
00:29:31.030 [2024-07-26 11:37:26.577917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.030 [2024-07-26 11:37:26.577973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.030 qpair failed and we were unable to recover it. 00:29:31.030 [2024-07-26 11:37:26.578159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.030 [2024-07-26 11:37:26.578186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.030 qpair failed and we were unable to recover it. 00:29:31.030 [2024-07-26 11:37:26.578380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.030 [2024-07-26 11:37:26.578413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.030 qpair failed and we were unable to recover it. 00:29:31.030 [2024-07-26 11:37:26.578637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.030 [2024-07-26 11:37:26.578664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.030 qpair failed and we were unable to recover it. 00:29:31.030 [2024-07-26 11:37:26.578892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.030 [2024-07-26 11:37:26.578944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.030 qpair failed and we were unable to recover it. 00:29:31.030 [2024-07-26 11:37:26.579161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.030 [2024-07-26 11:37:26.579188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.030 qpair failed and we were unable to recover it. 00:29:31.030 [2024-07-26 11:37:26.579399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.030 [2024-07-26 11:37:26.579437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.030 qpair failed and we were unable to recover it. 00:29:31.030 [2024-07-26 11:37:26.579676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.030 [2024-07-26 11:37:26.579703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.030 qpair failed and we were unable to recover it. 00:29:31.030 [2024-07-26 11:37:26.579924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.030 [2024-07-26 11:37:26.579977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.030 qpair failed and we were unable to recover it. 00:29:31.030 [2024-07-26 11:37:26.580248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.030 [2024-07-26 11:37:26.580276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.030 qpair failed and we were unable to recover it. 
00:29:31.030 [2024-07-26 11:37:26.580508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.030 [2024-07-26 11:37:26.580536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.030 qpair failed and we were unable to recover it. 00:29:31.030 [2024-07-26 11:37:26.580748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.030 [2024-07-26 11:37:26.580775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.030 qpair failed and we were unable to recover it. 00:29:31.030 [2024-07-26 11:37:26.581001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.030 [2024-07-26 11:37:26.581056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.030 qpair failed and we were unable to recover it. 00:29:31.030 [2024-07-26 11:37:26.581270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.030 [2024-07-26 11:37:26.581298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.030 qpair failed and we were unable to recover it. 00:29:31.030 [2024-07-26 11:37:26.581506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.030 [2024-07-26 11:37:26.581533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.030 qpair failed and we were unable to recover it. 00:29:31.030 [2024-07-26 11:37:26.581730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.030 [2024-07-26 11:37:26.581756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.030 qpair failed and we were unable to recover it. 00:29:31.030 [2024-07-26 11:37:26.581965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.030 [2024-07-26 11:37:26.582019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.030 qpair failed and we were unable to recover it. 00:29:31.030 [2024-07-26 11:37:26.582212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.030 [2024-07-26 11:37:26.582238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.030 qpair failed and we were unable to recover it. 00:29:31.030 [2024-07-26 11:37:26.582435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.030 [2024-07-26 11:37:26.582470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.030 qpair failed and we were unable to recover it. 00:29:31.030 [2024-07-26 11:37:26.582662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.030 [2024-07-26 11:37:26.582689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.030 qpair failed and we were unable to recover it. 
00:29:31.030 [2024-07-26 11:37:26.582905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.030 [2024-07-26 11:37:26.582961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.030 qpair failed and we were unable to recover it. 00:29:31.030 [2024-07-26 11:37:26.583188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.030 [2024-07-26 11:37:26.583216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.030 qpair failed and we were unable to recover it. 00:29:31.030 [2024-07-26 11:37:26.583421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.030 [2024-07-26 11:37:26.583461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.030 qpair failed and we were unable to recover it. 00:29:31.030 [2024-07-26 11:37:26.583696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.030 [2024-07-26 11:37:26.583723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.030 qpair failed and we were unable to recover it. 00:29:31.030 [2024-07-26 11:37:26.583913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.030 [2024-07-26 11:37:26.583969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.030 qpair failed and we were unable to recover it. 00:29:31.030 [2024-07-26 11:37:26.584165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.030 [2024-07-26 11:37:26.584193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.030 qpair failed and we were unable to recover it. 00:29:31.030 [2024-07-26 11:37:26.584424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.031 [2024-07-26 11:37:26.584492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.031 qpair failed and we were unable to recover it. 00:29:31.031 [2024-07-26 11:37:26.584673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.031 [2024-07-26 11:37:26.584701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.031 qpair failed and we were unable to recover it. 00:29:31.031 [2024-07-26 11:37:26.584896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.031 [2024-07-26 11:37:26.584952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.031 qpair failed and we were unable to recover it. 00:29:31.031 [2024-07-26 11:37:26.585122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.031 [2024-07-26 11:37:26.585148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.031 qpair failed and we were unable to recover it. 
00:29:31.031 [2024-07-26 11:37:26.585355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.031 [2024-07-26 11:37:26.585389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.031 qpair failed and we were unable to recover it. 00:29:31.031 [2024-07-26 11:37:26.585607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.031 [2024-07-26 11:37:26.585634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.031 qpair failed and we were unable to recover it. 00:29:31.031 [2024-07-26 11:37:26.585849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.031 [2024-07-26 11:37:26.585905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.031 qpair failed and we were unable to recover it. 00:29:31.031 [2024-07-26 11:37:26.586102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.031 [2024-07-26 11:37:26.586130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.031 qpair failed and we were unable to recover it. 00:29:31.031 [2024-07-26 11:37:26.586344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.031 [2024-07-26 11:37:26.586384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.031 qpair failed and we were unable to recover it. 00:29:31.031 [2024-07-26 11:37:26.586618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.031 [2024-07-26 11:37:26.586646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.031 qpair failed and we were unable to recover it. 00:29:31.031 [2024-07-26 11:37:26.586864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.031 [2024-07-26 11:37:26.586920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.031 qpair failed and we were unable to recover it. 00:29:31.031 [2024-07-26 11:37:26.587099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.031 [2024-07-26 11:37:26.587127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.031 qpair failed and we were unable to recover it. 00:29:31.031 [2024-07-26 11:37:26.587341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.031 [2024-07-26 11:37:26.587374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.031 qpair failed and we were unable to recover it. 00:29:31.031 [2024-07-26 11:37:26.587583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.031 [2024-07-26 11:37:26.587611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.031 qpair failed and we were unable to recover it. 
00:29:31.031 [2024-07-26 11:37:26.587809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.031 [2024-07-26 11:37:26.587869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.031 qpair failed and we were unable to recover it. 00:29:31.031 [2024-07-26 11:37:26.588103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.031 [2024-07-26 11:37:26.588130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.031 qpair failed and we were unable to recover it. 00:29:31.031 [2024-07-26 11:37:26.588318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.031 [2024-07-26 11:37:26.588352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.031 qpair failed and we were unable to recover it. 00:29:31.031 [2024-07-26 11:37:26.588569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.031 [2024-07-26 11:37:26.588598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.031 qpair failed and we were unable to recover it. 00:29:31.031 [2024-07-26 11:37:26.588771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.031 [2024-07-26 11:37:26.588828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.031 qpair failed and we were unable to recover it. 00:29:31.031 [2024-07-26 11:37:26.589052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.031 [2024-07-26 11:37:26.589078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.031 qpair failed and we were unable to recover it. 00:29:31.031 [2024-07-26 11:37:26.589261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.031 [2024-07-26 11:37:26.589294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.031 qpair failed and we were unable to recover it. 00:29:31.031 [2024-07-26 11:37:26.589496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.031 [2024-07-26 11:37:26.589525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.031 qpair failed and we were unable to recover it. 00:29:31.031 [2024-07-26 11:37:26.589760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.031 [2024-07-26 11:37:26.589813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.031 qpair failed and we were unable to recover it. 00:29:31.031 [2024-07-26 11:37:26.590028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.031 [2024-07-26 11:37:26.590055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.031 qpair failed and we were unable to recover it. 
00:29:31.031 [2024-07-26 11:37:26.590262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.031 [2024-07-26 11:37:26.590295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.031 qpair failed and we were unable to recover it. 00:29:31.031 [2024-07-26 11:37:26.590472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.031 [2024-07-26 11:37:26.590500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.031 qpair failed and we were unable to recover it. 00:29:31.031 [2024-07-26 11:37:26.590696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.031 [2024-07-26 11:37:26.590758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.031 qpair failed and we were unable to recover it. 00:29:31.031 [2024-07-26 11:37:26.590987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.031 [2024-07-26 11:37:26.591015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.031 qpair failed and we were unable to recover it. 00:29:31.031 [2024-07-26 11:37:26.591232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.031 [2024-07-26 11:37:26.591266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.031 qpair failed and we were unable to recover it. 00:29:31.031 [2024-07-26 11:37:26.591466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.031 [2024-07-26 11:37:26.591493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.031 qpair failed and we were unable to recover it. 00:29:31.031 [2024-07-26 11:37:26.591698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.031 [2024-07-26 11:37:26.591726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.031 qpair failed and we were unable to recover it. 00:29:31.031 [2024-07-26 11:37:26.591956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.031 [2024-07-26 11:37:26.591983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.031 qpair failed and we were unable to recover it. 00:29:31.031 [2024-07-26 11:37:26.592208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.031 [2024-07-26 11:37:26.592263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.031 qpair failed and we were unable to recover it. 00:29:31.031 [2024-07-26 11:37:26.592473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.031 [2024-07-26 11:37:26.592501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.031 qpair failed and we were unable to recover it. 
00:29:31.031 [2024-07-26 11:37:26.592700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.031 [2024-07-26 11:37:26.592761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.031 qpair failed and we were unable to recover it. 00:29:31.031 [2024-07-26 11:37:26.592994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.031 [2024-07-26 11:37:26.593022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.031 qpair failed and we were unable to recover it. 00:29:31.031 [2024-07-26 11:37:26.593205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.031 [2024-07-26 11:37:26.593239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.031 qpair failed and we were unable to recover it. 00:29:31.031 [2024-07-26 11:37:26.593440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.031 [2024-07-26 11:37:26.593467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.031 qpair failed and we were unable to recover it. 00:29:31.032 [2024-07-26 11:37:26.593620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.032 [2024-07-26 11:37:26.593647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.032 qpair failed and we were unable to recover it. 00:29:31.032 [2024-07-26 11:37:26.593883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.032 [2024-07-26 11:37:26.593910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.032 qpair failed and we were unable to recover it. 00:29:31.032 [2024-07-26 11:37:26.594104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.032 [2024-07-26 11:37:26.594159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.032 qpair failed and we were unable to recover it. 00:29:31.032 [2024-07-26 11:37:26.594378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.032 [2024-07-26 11:37:26.594406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.032 qpair failed and we were unable to recover it. 00:29:31.032 [2024-07-26 11:37:26.594572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.032 [2024-07-26 11:37:26.594600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.032 qpair failed and we were unable to recover it. 00:29:31.032 [2024-07-26 11:37:26.594814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.032 [2024-07-26 11:37:26.594842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.032 qpair failed and we were unable to recover it. 
00:29:31.032 [2024-07-26 11:37:26.595036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.032 [2024-07-26 11:37:26.595088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420
00:29:31.032 qpair failed and we were unable to recover it.
[... the same three-line sequence repeats for every subsequent reconnect attempt, with only the timestamps advancing (11:37:26.595275 through 11:37:26.645949): each connect() to 10.0.0.2, port=4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7f0cbc000b90, and the qpair is not recovered ...]
00:29:31.315 [2024-07-26 11:37:26.646150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.315 [2024-07-26 11:37:26.646204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.315 qpair failed and we were unable to recover it. 00:29:31.315 [2024-07-26 11:37:26.646380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.315 [2024-07-26 11:37:26.646408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.315 qpair failed and we were unable to recover it. 00:29:31.315 [2024-07-26 11:37:26.646626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.315 [2024-07-26 11:37:26.646654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.315 qpair failed and we were unable to recover it. 00:29:31.315 [2024-07-26 11:37:26.646892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.315 [2024-07-26 11:37:26.646920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.315 qpair failed and we were unable to recover it. 00:29:31.315 [2024-07-26 11:37:26.647117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.315 [2024-07-26 11:37:26.647170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.315 qpair failed and we were unable to recover it. 00:29:31.315 [2024-07-26 11:37:26.647387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.315 [2024-07-26 11:37:26.647415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.315 qpair failed and we were unable to recover it. 00:29:31.315 [2024-07-26 11:37:26.647660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.315 [2024-07-26 11:37:26.647707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.315 qpair failed and we were unable to recover it. 00:29:31.315 [2024-07-26 11:37:26.647934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.315 [2024-07-26 11:37:26.647962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.315 qpair failed and we were unable to recover it. 00:29:31.315 [2024-07-26 11:37:26.648191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.315 [2024-07-26 11:37:26.648245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.315 qpair failed and we were unable to recover it. 00:29:31.315 [2024-07-26 11:37:26.648472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.315 [2024-07-26 11:37:26.648500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.315 qpair failed and we were unable to recover it. 
00:29:31.315 [2024-07-26 11:37:26.648704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.315 [2024-07-26 11:37:26.648739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.315 qpair failed and we were unable to recover it. 00:29:31.315 [2024-07-26 11:37:26.648919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.315 [2024-07-26 11:37:26.648955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.315 qpair failed and we were unable to recover it. 00:29:31.315 [2024-07-26 11:37:26.649119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.316 [2024-07-26 11:37:26.649173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.316 qpair failed and we were unable to recover it. 00:29:31.316 [2024-07-26 11:37:26.649401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.316 [2024-07-26 11:37:26.649442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.316 qpair failed and we were unable to recover it. 00:29:31.316 [2024-07-26 11:37:26.649670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.316 [2024-07-26 11:37:26.649698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.316 qpair failed and we were unable to recover it. 00:29:31.316 [2024-07-26 11:37:26.649845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.316 [2024-07-26 11:37:26.649872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.316 qpair failed and we were unable to recover it. 00:29:31.316 [2024-07-26 11:37:26.650061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.316 [2024-07-26 11:37:26.650117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.316 qpair failed and we were unable to recover it. 00:29:31.316 [2024-07-26 11:37:26.650340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.316 [2024-07-26 11:37:26.650368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.316 qpair failed and we were unable to recover it. 00:29:31.316 [2024-07-26 11:37:26.650555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.316 [2024-07-26 11:37:26.650584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.316 qpair failed and we were unable to recover it. 00:29:31.316 [2024-07-26 11:37:26.650735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.316 [2024-07-26 11:37:26.650763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.316 qpair failed and we were unable to recover it. 
00:29:31.316 [2024-07-26 11:37:26.650946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.316 [2024-07-26 11:37:26.651000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.316 qpair failed and we were unable to recover it. 00:29:31.316 [2024-07-26 11:37:26.651187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.316 [2024-07-26 11:37:26.651218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.316 qpair failed and we were unable to recover it. 00:29:31.316 [2024-07-26 11:37:26.651412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.316 [2024-07-26 11:37:26.651452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.316 qpair failed and we were unable to recover it. 00:29:31.316 [2024-07-26 11:37:26.651640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.316 [2024-07-26 11:37:26.651667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.316 qpair failed and we were unable to recover it. 00:29:31.316 [2024-07-26 11:37:26.651855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.316 [2024-07-26 11:37:26.651910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.316 qpair failed and we were unable to recover it. 00:29:31.316 [2024-07-26 11:37:26.652143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.316 [2024-07-26 11:37:26.652171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.316 qpair failed and we were unable to recover it. 00:29:31.316 [2024-07-26 11:37:26.652342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.316 [2024-07-26 11:37:26.652386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.316 qpair failed and we were unable to recover it. 00:29:31.316 [2024-07-26 11:37:26.652624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.316 [2024-07-26 11:37:26.652652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.316 qpair failed and we were unable to recover it. 00:29:31.316 [2024-07-26 11:37:26.652903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.316 [2024-07-26 11:37:26.652959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.316 qpair failed and we were unable to recover it. 00:29:31.316 [2024-07-26 11:37:26.653182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.316 [2024-07-26 11:37:26.653210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.316 qpair failed and we were unable to recover it. 
00:29:31.316 [2024-07-26 11:37:26.653393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.316 [2024-07-26 11:37:26.653436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.316 qpair failed and we were unable to recover it. 00:29:31.316 [2024-07-26 11:37:26.653676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.316 [2024-07-26 11:37:26.653703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.316 qpair failed and we were unable to recover it. 00:29:31.316 [2024-07-26 11:37:26.653917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.316 [2024-07-26 11:37:26.653969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.316 qpair failed and we were unable to recover it. 00:29:31.316 [2024-07-26 11:37:26.654202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.316 [2024-07-26 11:37:26.654230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.316 qpair failed and we were unable to recover it. 00:29:31.316 [2024-07-26 11:37:26.654416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.316 [2024-07-26 11:37:26.654457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.316 qpair failed and we were unable to recover it. 00:29:31.316 [2024-07-26 11:37:26.654637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.316 [2024-07-26 11:37:26.654665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.316 qpair failed and we were unable to recover it. 00:29:31.316 [2024-07-26 11:37:26.654833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.316 [2024-07-26 11:37:26.654889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.316 qpair failed and we were unable to recover it. 00:29:31.316 [2024-07-26 11:37:26.655075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.316 [2024-07-26 11:37:26.655102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.316 qpair failed and we were unable to recover it. 00:29:31.316 [2024-07-26 11:37:26.655278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.316 [2024-07-26 11:37:26.655327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.316 qpair failed and we were unable to recover it. 00:29:31.316 [2024-07-26 11:37:26.655532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.316 [2024-07-26 11:37:26.655560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.316 qpair failed and we were unable to recover it. 
00:29:31.316 [2024-07-26 11:37:26.655779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.316 [2024-07-26 11:37:26.655835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.316 qpair failed and we were unable to recover it. 00:29:31.316 [2024-07-26 11:37:26.656050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.316 [2024-07-26 11:37:26.656078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.316 qpair failed and we were unable to recover it. 00:29:31.316 [2024-07-26 11:37:26.656286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.316 [2024-07-26 11:37:26.656319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.316 qpair failed and we were unable to recover it. 00:29:31.316 [2024-07-26 11:37:26.656513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.316 [2024-07-26 11:37:26.656541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.316 qpair failed and we were unable to recover it. 00:29:31.316 [2024-07-26 11:37:26.656766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.316 [2024-07-26 11:37:26.656819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.316 qpair failed and we were unable to recover it. 00:29:31.316 [2024-07-26 11:37:26.657029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.316 [2024-07-26 11:37:26.657056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.316 qpair failed and we were unable to recover it. 00:29:31.316 [2024-07-26 11:37:26.657209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.316 [2024-07-26 11:37:26.657243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.316 qpair failed and we were unable to recover it. 00:29:31.316 [2024-07-26 11:37:26.657417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.316 [2024-07-26 11:37:26.657450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.316 qpair failed and we were unable to recover it. 00:29:31.316 [2024-07-26 11:37:26.657633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.316 [2024-07-26 11:37:26.657678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.316 qpair failed and we were unable to recover it. 00:29:31.316 [2024-07-26 11:37:26.657914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.316 [2024-07-26 11:37:26.657942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.317 qpair failed and we were unable to recover it. 
00:29:31.317 [2024-07-26 11:37:26.658148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.317 [2024-07-26 11:37:26.658203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.317 qpair failed and we were unable to recover it. 00:29:31.317 [2024-07-26 11:37:26.658415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.317 [2024-07-26 11:37:26.658448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.317 qpair failed and we were unable to recover it. 00:29:31.317 [2024-07-26 11:37:26.658654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.317 [2024-07-26 11:37:26.658702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.317 qpair failed and we were unable to recover it. 00:29:31.317 [2024-07-26 11:37:26.658902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.317 [2024-07-26 11:37:26.658930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.317 qpair failed and we were unable to recover it. 00:29:31.317 [2024-07-26 11:37:26.659079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.317 [2024-07-26 11:37:26.659134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.317 qpair failed and we were unable to recover it. 00:29:31.317 [2024-07-26 11:37:26.659355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.317 [2024-07-26 11:37:26.659383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.317 qpair failed and we were unable to recover it. 00:29:31.317 [2024-07-26 11:37:26.659621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.317 [2024-07-26 11:37:26.659649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.317 qpair failed and we were unable to recover it. 00:29:31.317 [2024-07-26 11:37:26.659855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.317 [2024-07-26 11:37:26.659882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.317 qpair failed and we were unable to recover it. 00:29:31.317 [2024-07-26 11:37:26.660072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.317 [2024-07-26 11:37:26.660127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.317 qpair failed and we were unable to recover it. 00:29:31.317 [2024-07-26 11:37:26.660327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.317 [2024-07-26 11:37:26.660355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.317 qpair failed and we were unable to recover it. 
00:29:31.317 [2024-07-26 11:37:26.660505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.317 [2024-07-26 11:37:26.660534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.317 qpair failed and we were unable to recover it. 00:29:31.317 [2024-07-26 11:37:26.660705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.317 [2024-07-26 11:37:26.660737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.317 qpair failed and we were unable to recover it. 00:29:31.317 [2024-07-26 11:37:26.660902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.317 [2024-07-26 11:37:26.660964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.317 qpair failed and we were unable to recover it. 00:29:31.317 [2024-07-26 11:37:26.661194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.317 [2024-07-26 11:37:26.661221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.317 qpair failed and we were unable to recover it. 00:29:31.317 [2024-07-26 11:37:26.661418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.317 [2024-07-26 11:37:26.661474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.317 qpair failed and we were unable to recover it. 00:29:31.317 [2024-07-26 11:37:26.661657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.317 [2024-07-26 11:37:26.661684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.317 qpair failed and we were unable to recover it. 00:29:31.317 [2024-07-26 11:37:26.661894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.317 [2024-07-26 11:37:26.661947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.317 qpair failed and we were unable to recover it. 00:29:31.317 [2024-07-26 11:37:26.662164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.317 [2024-07-26 11:37:26.662192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.317 qpair failed and we were unable to recover it. 00:29:31.317 [2024-07-26 11:37:26.662410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.317 [2024-07-26 11:37:26.662451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.317 qpair failed and we were unable to recover it. 00:29:31.317 [2024-07-26 11:37:26.662613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.317 [2024-07-26 11:37:26.662650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.317 qpair failed and we were unable to recover it. 
00:29:31.317 [2024-07-26 11:37:26.662856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.317 [2024-07-26 11:37:26.662911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.317 qpair failed and we were unable to recover it. 00:29:31.317 [2024-07-26 11:37:26.663106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.317 [2024-07-26 11:37:26.663133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.317 qpair failed and we were unable to recover it. 00:29:31.317 [2024-07-26 11:37:26.663317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.317 [2024-07-26 11:37:26.663351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.317 qpair failed and we were unable to recover it. 00:29:31.317 [2024-07-26 11:37:26.663583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.317 [2024-07-26 11:37:26.663612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.317 qpair failed and we were unable to recover it. 00:29:31.317 [2024-07-26 11:37:26.663798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.317 [2024-07-26 11:37:26.663853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.317 qpair failed and we were unable to recover it. 00:29:31.317 [2024-07-26 11:37:26.664055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.317 [2024-07-26 11:37:26.664083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.317 qpair failed and we were unable to recover it. 00:29:31.317 [2024-07-26 11:37:26.664277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.317 [2024-07-26 11:37:26.664311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.317 qpair failed and we were unable to recover it. 00:29:31.317 [2024-07-26 11:37:26.664516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.317 [2024-07-26 11:37:26.664544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.317 qpair failed and we were unable to recover it. 00:29:31.317 [2024-07-26 11:37:26.664747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.317 [2024-07-26 11:37:26.664802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.317 qpair failed and we were unable to recover it. 00:29:31.317 [2024-07-26 11:37:26.665032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.317 [2024-07-26 11:37:26.665060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.317 qpair failed and we were unable to recover it. 
00:29:31.317 [2024-07-26 11:37:26.665233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.317 [2024-07-26 11:37:26.665266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.317 qpair failed and we were unable to recover it. 00:29:31.317 [2024-07-26 11:37:26.665494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.317 [2024-07-26 11:37:26.665521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.317 qpair failed and we were unable to recover it. 00:29:31.317 [2024-07-26 11:37:26.665752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.317 [2024-07-26 11:37:26.665785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.317 qpair failed and we were unable to recover it. 00:29:31.317 [2024-07-26 11:37:26.665972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.317 [2024-07-26 11:37:26.665999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.317 qpair failed and we were unable to recover it. 00:29:31.317 [2024-07-26 11:37:26.666200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.317 [2024-07-26 11:37:26.666255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.317 qpair failed and we were unable to recover it. 00:29:31.317 [2024-07-26 11:37:26.666474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.317 [2024-07-26 11:37:26.666502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.317 qpair failed and we were unable to recover it. 00:29:31.317 [2024-07-26 11:37:26.666719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.317 [2024-07-26 11:37:26.666774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.317 qpair failed and we were unable to recover it. 00:29:31.318 [2024-07-26 11:37:26.666992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.318 [2024-07-26 11:37:26.667020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.318 qpair failed and we were unable to recover it. 00:29:31.318 [2024-07-26 11:37:26.667227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.318 [2024-07-26 11:37:26.667283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.318 qpair failed and we were unable to recover it. 00:29:31.318 [2024-07-26 11:37:26.667505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.318 [2024-07-26 11:37:26.667533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.318 qpair failed and we were unable to recover it. 
00:29:31.318 [2024-07-26 11:37:26.667724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.318 [2024-07-26 11:37:26.667773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.318 qpair failed and we were unable to recover it. 00:29:31.318 [2024-07-26 11:37:26.667975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.318 [2024-07-26 11:37:26.668002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.318 qpair failed and we were unable to recover it. 00:29:31.318 [2024-07-26 11:37:26.668177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.318 [2024-07-26 11:37:26.668233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.318 qpair failed and we were unable to recover it. 00:29:31.318 [2024-07-26 11:37:26.668418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.318 [2024-07-26 11:37:26.668453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.318 qpair failed and we were unable to recover it. 00:29:31.318 [2024-07-26 11:37:26.668643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.318 [2024-07-26 11:37:26.668688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.318 qpair failed and we were unable to recover it. 00:29:31.318 [2024-07-26 11:37:26.668889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.318 [2024-07-26 11:37:26.668915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.318 qpair failed and we were unable to recover it. 00:29:31.318 [2024-07-26 11:37:26.669123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.318 [2024-07-26 11:37:26.669178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.318 qpair failed and we were unable to recover it. 00:29:31.318 [2024-07-26 11:37:26.669370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.318 [2024-07-26 11:37:26.669397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.318 qpair failed and we were unable to recover it. 00:29:31.318 [2024-07-26 11:37:26.669585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.318 [2024-07-26 11:37:26.669612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.318 qpair failed and we were unable to recover it. 00:29:31.318 [2024-07-26 11:37:26.669816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.318 [2024-07-26 11:37:26.669844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.318 qpair failed and we were unable to recover it. 
00:29:31.318 [2024-07-26 11:37:26.670067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.318 [2024-07-26 11:37:26.670123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.318 qpair failed and we were unable to recover it. 00:29:31.318 [2024-07-26 11:37:26.670337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.318 [2024-07-26 11:37:26.670372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.318 qpair failed and we were unable to recover it. 00:29:31.318 [2024-07-26 11:37:26.670557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.318 [2024-07-26 11:37:26.670585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.318 qpair failed and we were unable to recover it. 00:29:31.318 [2024-07-26 11:37:26.670729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.318 [2024-07-26 11:37:26.670757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.318 qpair failed and we were unable to recover it. 00:29:31.318 [2024-07-26 11:37:26.670991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.318 [2024-07-26 11:37:26.671046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.318 qpair failed and we were unable to recover it. 00:29:31.318 [2024-07-26 11:37:26.671278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.318 [2024-07-26 11:37:26.671305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.318 qpair failed and we were unable to recover it. 00:29:31.318 [2024-07-26 11:37:26.671495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.318 [2024-07-26 11:37:26.671523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.318 qpair failed and we were unable to recover it. 00:29:31.318 [2024-07-26 11:37:26.671733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.318 [2024-07-26 11:37:26.671761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.318 qpair failed and we were unable to recover it. 00:29:31.318 [2024-07-26 11:37:26.671945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.318 [2024-07-26 11:37:26.671999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.318 qpair failed and we were unable to recover it. 00:29:31.318 [2024-07-26 11:37:26.672192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.318 [2024-07-26 11:37:26.672220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.318 qpair failed and we were unable to recover it. 
00:29:31.318 [2024-07-26 11:37:26.672442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.318 [2024-07-26 11:37:26.672490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.318 qpair failed and we were unable to recover it. 00:29:31.318 [2024-07-26 11:37:26.672662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.318 [2024-07-26 11:37:26.672689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.318 qpair failed and we were unable to recover it. 00:29:31.318 [2024-07-26 11:37:26.672893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.318 [2024-07-26 11:37:26.672947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.318 qpair failed and we were unable to recover it. 00:29:31.318 [2024-07-26 11:37:26.673154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.318 [2024-07-26 11:37:26.673181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.318 qpair failed and we were unable to recover it. 00:29:31.318 [2024-07-26 11:37:26.673393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.318 [2024-07-26 11:37:26.673426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.318 qpair failed and we were unable to recover it. 00:29:31.318 [2024-07-26 11:37:26.673669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.318 [2024-07-26 11:37:26.673696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.318 qpair failed and we were unable to recover it. 00:29:31.318 [2024-07-26 11:37:26.673905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.318 [2024-07-26 11:37:26.673960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.318 qpair failed and we were unable to recover it. 00:29:31.318 [2024-07-26 11:37:26.674176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.318 [2024-07-26 11:37:26.674203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.318 qpair failed and we were unable to recover it. 00:29:31.318 [2024-07-26 11:37:26.674398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.318 [2024-07-26 11:37:26.674440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.318 qpair failed and we were unable to recover it. 00:29:31.318 [2024-07-26 11:37:26.674645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.318 [2024-07-26 11:37:26.674672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.318 qpair failed and we were unable to recover it. 
00:29:31.318 [2024-07-26 11:37:26.674895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.318 [2024-07-26 11:37:26.674949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.318 qpair failed and we were unable to recover it. 00:29:31.318 [2024-07-26 11:37:26.675172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.318 [2024-07-26 11:37:26.675200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.318 qpair failed and we were unable to recover it. 00:29:31.318 [2024-07-26 11:37:26.675387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.318 [2024-07-26 11:37:26.675421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.318 qpair failed and we were unable to recover it. 00:29:31.318 [2024-07-26 11:37:26.675645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.318 [2024-07-26 11:37:26.675672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.318 qpair failed and we were unable to recover it. 00:29:31.318 [2024-07-26 11:37:26.675863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.319 [2024-07-26 11:37:26.675920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.319 qpair failed and we were unable to recover it. 00:29:31.319 [2024-07-26 11:37:26.676149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.319 [2024-07-26 11:37:26.676177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.319 qpair failed and we were unable to recover it. 00:29:31.319 [2024-07-26 11:37:26.676409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.319 [2024-07-26 11:37:26.676464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.319 qpair failed and we were unable to recover it. 00:29:31.319 [2024-07-26 11:37:26.676692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.319 [2024-07-26 11:37:26.676720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.319 qpair failed and we were unable to recover it. 00:29:31.319 [2024-07-26 11:37:26.676945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.319 [2024-07-26 11:37:26.676998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.319 qpair failed and we were unable to recover it. 00:29:31.319 [2024-07-26 11:37:26.677221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.319 [2024-07-26 11:37:26.677248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.319 qpair failed and we were unable to recover it. 
00:29:31.319 [2024-07-26 11:37:26.677403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.319 [2024-07-26 11:37:26.677446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420
00:29:31.319 qpair failed and we were unable to recover it.
00:29:31.319 [... the same three-line failure (connect() errno = 111, i.e. ECONNREFUSED, against 10.0.0.2:4420 on tqpair=0x7f0cbc000b90) repeats for every reconnect attempt from 11:37:26.677 through 11:37:26.703 ...]
00:29:31.322 [2024-07-26 11:37:26.703980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.322 [2024-07-26 11:37:26.704007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420
00:29:31.322 qpair failed and we were unable to recover it.
00:29:31.322 [... six more identical reconnect failures between 11:37:26.704214 and 11:37:26.705361, their lines interleaved in the raw log with the application start-up notices below ...]
00:29:31.322 [2024-07-26 11:37:26.704841] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:31.322 [2024-07-26 11:37:26.704905] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:31.322 [2024-07-26 11:37:26.704925] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:31.322 [2024-07-26 11:37:26.704942] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:31.322 [2024-07-26 11:37:26.704956] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:31.322 [2024-07-26 11:37:26.705062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:29:31.322 [2024-07-26 11:37:26.705120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:29:31.322 [2024-07-26 11:37:26.705146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:29:31.322 [2024-07-26 11:37:26.705149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
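The notices above give the tracing recipe for this run; as a minimal sketch (assuming a shell on the test host while the target is still up, and that instance 0 is the only SPDK application running, as the notice states; the copy destination is arbitrary):

    spdk_trace -s nvmf -i 0        # snapshot of nvmf tracepoint events at runtime, per the NOTICE above
    cp /dev/shm/nvmf_trace.0 .     # or keep the raw trace file for offline analysis/debug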
00:29:31.322 [2024-07-26 11:37:26.705606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.322 [2024-07-26 11:37:26.705635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420
00:29:31.322 qpair failed and we were unable to recover it.
00:29:31.324 [... the reconnect attempts continue to fail with the same three-line error through 11:37:26.726413 ...]
00:29:31.325 [2024-07-26 11:37:26.726613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.325 [2024-07-26 11:37:26.726642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.325 qpair failed and we were unable to recover it. 00:29:31.325 [2024-07-26 11:37:26.726832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.325 [2024-07-26 11:37:26.726870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.325 qpair failed and we were unable to recover it. 00:29:31.325 [2024-07-26 11:37:26.727053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.325 [2024-07-26 11:37:26.727081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.325 qpair failed and we were unable to recover it. 00:29:31.325 [2024-07-26 11:37:26.727277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.325 [2024-07-26 11:37:26.727308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.325 qpair failed and we were unable to recover it. 00:29:31.325 [2024-07-26 11:37:26.727506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.325 [2024-07-26 11:37:26.727534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.325 qpair failed and we were unable to recover it. 00:29:31.325 [2024-07-26 11:37:26.727715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.325 [2024-07-26 11:37:26.727747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.325 qpair failed and we were unable to recover it. 00:29:31.325 [2024-07-26 11:37:26.727957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.325 [2024-07-26 11:37:26.727985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.325 qpair failed and we were unable to recover it. 00:29:31.325 [2024-07-26 11:37:26.728206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.325 [2024-07-26 11:37:26.728239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.325 qpair failed and we were unable to recover it. 00:29:31.325 [2024-07-26 11:37:26.728434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.325 [2024-07-26 11:37:26.728483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.325 qpair failed and we were unable to recover it. 00:29:31.325 [2024-07-26 11:37:26.728692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.325 [2024-07-26 11:37:26.728719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.325 qpair failed and we were unable to recover it. 
00:29:31.325 [2024-07-26 11:37:26.728926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.325 [2024-07-26 11:37:26.728953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.325 qpair failed and we were unable to recover it. 00:29:31.325 [2024-07-26 11:37:26.729175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.325 [2024-07-26 11:37:26.729207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.325 qpair failed and we were unable to recover it. 00:29:31.325 [2024-07-26 11:37:26.729422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.325 [2024-07-26 11:37:26.729460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.325 qpair failed and we were unable to recover it. 00:29:31.325 [2024-07-26 11:37:26.729683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.325 [2024-07-26 11:37:26.729715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.325 qpair failed and we were unable to recover it. 00:29:31.325 [2024-07-26 11:37:26.729889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.325 [2024-07-26 11:37:26.729916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.325 qpair failed and we were unable to recover it. 00:29:31.325 [2024-07-26 11:37:26.730115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.325 [2024-07-26 11:37:26.730147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.325 qpair failed and we were unable to recover it. 00:29:31.325 [2024-07-26 11:37:26.730331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.325 [2024-07-26 11:37:26.730360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.325 qpair failed and we were unable to recover it. 00:29:31.325 [2024-07-26 11:37:26.730573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.325 [2024-07-26 11:37:26.730602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.325 qpair failed and we were unable to recover it. 00:29:31.325 [2024-07-26 11:37:26.730807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.325 [2024-07-26 11:37:26.730834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.325 qpair failed and we were unable to recover it. 00:29:31.325 [2024-07-26 11:37:26.731061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.325 [2024-07-26 11:37:26.731094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.325 qpair failed and we were unable to recover it. 
00:29:31.325 [2024-07-26 11:37:26.731305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.325 [2024-07-26 11:37:26.731333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.325 qpair failed and we were unable to recover it. 00:29:31.325 [2024-07-26 11:37:26.731516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.325 [2024-07-26 11:37:26.731544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.325 qpair failed and we were unable to recover it. 00:29:31.325 [2024-07-26 11:37:26.731738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.325 [2024-07-26 11:37:26.731765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.325 qpair failed and we were unable to recover it. 00:29:31.325 [2024-07-26 11:37:26.731975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.325 [2024-07-26 11:37:26.732008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.325 qpair failed and we were unable to recover it. 00:29:31.325 [2024-07-26 11:37:26.732218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.325 [2024-07-26 11:37:26.732244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.325 qpair failed and we were unable to recover it. 00:29:31.325 [2024-07-26 11:37:26.732423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.325 [2024-07-26 11:37:26.732462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.325 qpair failed and we were unable to recover it. 00:29:31.325 [2024-07-26 11:37:26.732686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.325 [2024-07-26 11:37:26.732714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.325 qpair failed and we were unable to recover it. 00:29:31.325 [2024-07-26 11:37:26.732944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.325 [2024-07-26 11:37:26.732977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.325 qpair failed and we were unable to recover it. 00:29:31.325 [2024-07-26 11:37:26.733180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.325 [2024-07-26 11:37:26.733209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.325 qpair failed and we were unable to recover it. 00:29:31.325 [2024-07-26 11:37:26.733358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.325 [2024-07-26 11:37:26.733391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.325 qpair failed and we were unable to recover it. 
00:29:31.325 [2024-07-26 11:37:26.733623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.325 [2024-07-26 11:37:26.733652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.325 qpair failed and we were unable to recover it. 00:29:31.325 [2024-07-26 11:37:26.733834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.325 [2024-07-26 11:37:26.733866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.325 qpair failed and we were unable to recover it. 00:29:31.325 [2024-07-26 11:37:26.734099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.325 [2024-07-26 11:37:26.734127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.325 qpair failed and we were unable to recover it. 00:29:31.325 [2024-07-26 11:37:26.734335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.325 [2024-07-26 11:37:26.734367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.325 qpair failed and we were unable to recover it. 00:29:31.326 [2024-07-26 11:37:26.734588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-07-26 11:37:26.734615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.326 qpair failed and we were unable to recover it. 00:29:31.326 [2024-07-26 11:37:26.734802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-07-26 11:37:26.734833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.326 qpair failed and we were unable to recover it. 00:29:31.326 [2024-07-26 11:37:26.735026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-07-26 11:37:26.735054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.326 qpair failed and we were unable to recover it. 00:29:31.326 [2024-07-26 11:37:26.735275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-07-26 11:37:26.735308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.326 qpair failed and we were unable to recover it. 00:29:31.326 [2024-07-26 11:37:26.735526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-07-26 11:37:26.735555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.326 qpair failed and we were unable to recover it. 00:29:31.326 [2024-07-26 11:37:26.735776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-07-26 11:37:26.735809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.326 qpair failed and we were unable to recover it. 
00:29:31.326 [2024-07-26 11:37:26.735974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-07-26 11:37:26.736001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.326 qpair failed and we were unable to recover it. 00:29:31.326 [2024-07-26 11:37:26.736193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-07-26 11:37:26.736230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.326 qpair failed and we were unable to recover it. 00:29:31.326 [2024-07-26 11:37:26.736434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-07-26 11:37:26.736483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.326 qpair failed and we were unable to recover it. 00:29:31.326 [2024-07-26 11:37:26.736688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-07-26 11:37:26.736734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.326 qpair failed and we were unable to recover it. 00:29:31.326 [2024-07-26 11:37:26.736949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-07-26 11:37:26.736976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.326 qpair failed and we were unable to recover it. 00:29:31.326 [2024-07-26 11:37:26.737194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-07-26 11:37:26.737225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.326 qpair failed and we were unable to recover it. 00:29:31.326 [2024-07-26 11:37:26.737419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-07-26 11:37:26.737451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.326 qpair failed and we were unable to recover it. 00:29:31.326 [2024-07-26 11:37:26.737628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-07-26 11:37:26.737655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.326 qpair failed and we were unable to recover it. 00:29:31.326 [2024-07-26 11:37:26.737855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-07-26 11:37:26.737882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.326 qpair failed and we were unable to recover it. 00:29:31.326 [2024-07-26 11:37:26.738099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-07-26 11:37:26.738132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.326 qpair failed and we were unable to recover it. 
00:29:31.326 [2024-07-26 11:37:26.738346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-07-26 11:37:26.738374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.326 qpair failed and we were unable to recover it. 00:29:31.326 [2024-07-26 11:37:26.738580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-07-26 11:37:26.738608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.326 qpair failed and we were unable to recover it. 00:29:31.326 [2024-07-26 11:37:26.738814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-07-26 11:37:26.738841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.326 qpair failed and we were unable to recover it. 00:29:31.326 [2024-07-26 11:37:26.739005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-07-26 11:37:26.739037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.326 qpair failed and we were unable to recover it. 00:29:31.326 [2024-07-26 11:37:26.739238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-07-26 11:37:26.739265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.326 qpair failed and we were unable to recover it. 00:29:31.326 [2024-07-26 11:37:26.739473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-07-26 11:37:26.739517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.326 qpair failed and we were unable to recover it. 00:29:31.326 [2024-07-26 11:37:26.739699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-07-26 11:37:26.739726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.326 qpair failed and we were unable to recover it. 00:29:31.326 [2024-07-26 11:37:26.739910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-07-26 11:37:26.739943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.326 qpair failed and we were unable to recover it. 00:29:31.326 [2024-07-26 11:37:26.740122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-07-26 11:37:26.740151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.326 qpair failed and we were unable to recover it. 00:29:31.326 [2024-07-26 11:37:26.740325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-07-26 11:37:26.740358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.326 qpair failed and we were unable to recover it. 
00:29:31.326 [2024-07-26 11:37:26.740566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-07-26 11:37:26.740594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.326 qpair failed and we were unable to recover it. 00:29:31.326 [2024-07-26 11:37:26.740755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-07-26 11:37:26.740787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.326 qpair failed and we were unable to recover it. 00:29:31.326 [2024-07-26 11:37:26.741016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-07-26 11:37:26.741043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.326 qpair failed and we were unable to recover it. 00:29:31.326 [2024-07-26 11:37:26.741209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-07-26 11:37:26.741242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.326 qpair failed and we were unable to recover it. 00:29:31.326 [2024-07-26 11:37:26.741424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-07-26 11:37:26.741457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.326 qpair failed and we were unable to recover it. 00:29:31.326 [2024-07-26 11:37:26.741636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-07-26 11:37:26.741664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.326 qpair failed and we were unable to recover it. 00:29:31.326 [2024-07-26 11:37:26.741849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-07-26 11:37:26.741876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.326 qpair failed and we were unable to recover it. 00:29:31.326 [2024-07-26 11:37:26.742085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-07-26 11:37:26.742117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.326 qpair failed and we were unable to recover it. 00:29:31.326 [2024-07-26 11:37:26.742317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-07-26 11:37:26.742344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.326 qpair failed and we were unable to recover it. 00:29:31.326 [2024-07-26 11:37:26.742495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-07-26 11:37:26.742523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.326 qpair failed and we were unable to recover it. 
00:29:31.326 [2024-07-26 11:37:26.742683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.326 [2024-07-26 11:37:26.742711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.326 qpair failed and we were unable to recover it. 00:29:31.326 [2024-07-26 11:37:26.742922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.327 [2024-07-26 11:37:26.742955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.327 qpair failed and we were unable to recover it. 00:29:31.327 [2024-07-26 11:37:26.743171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.327 [2024-07-26 11:37:26.743198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.327 qpair failed and we were unable to recover it. 00:29:31.327 [2024-07-26 11:37:26.743388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.327 [2024-07-26 11:37:26.743419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.327 qpair failed and we were unable to recover it. 00:29:31.327 [2024-07-26 11:37:26.743599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.327 [2024-07-26 11:37:26.743626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.327 qpair failed and we were unable to recover it. 00:29:31.327 [2024-07-26 11:37:26.743813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.327 [2024-07-26 11:37:26.743845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.327 qpair failed and we were unable to recover it. 00:29:31.327 [2024-07-26 11:37:26.744062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.327 [2024-07-26 11:37:26.744090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.327 qpair failed and we were unable to recover it. 00:29:31.327 [2024-07-26 11:37:26.744307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.327 [2024-07-26 11:37:26.744339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.327 qpair failed and we were unable to recover it. 00:29:31.327 [2024-07-26 11:37:26.744526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.327 [2024-07-26 11:37:26.744554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.327 qpair failed and we were unable to recover it. 00:29:31.327 [2024-07-26 11:37:26.744747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.327 [2024-07-26 11:37:26.744780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.327 qpair failed and we were unable to recover it. 
00:29:31.327 [2024-07-26 11:37:26.744969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.327 [2024-07-26 11:37:26.744997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.327 qpair failed and we were unable to recover it. 00:29:31.327 [2024-07-26 11:37:26.745186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.327 [2024-07-26 11:37:26.745223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.327 qpair failed and we were unable to recover it. 00:29:31.327 [2024-07-26 11:37:26.745471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.327 [2024-07-26 11:37:26.745499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.327 qpair failed and we were unable to recover it. 00:29:31.327 [2024-07-26 11:37:26.745684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.327 [2024-07-26 11:37:26.745712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.327 qpair failed and we were unable to recover it. 00:29:31.327 [2024-07-26 11:37:26.745950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.327 [2024-07-26 11:37:26.745977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.327 qpair failed and we were unable to recover it. 00:29:31.327 [2024-07-26 11:37:26.746156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.327 [2024-07-26 11:37:26.746188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.327 qpair failed and we were unable to recover it. 00:29:31.327 [2024-07-26 11:37:26.746433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.327 [2024-07-26 11:37:26.746481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.327 qpair failed and we were unable to recover it. 00:29:31.327 [2024-07-26 11:37:26.746665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.327 [2024-07-26 11:37:26.746713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.327 qpair failed and we were unable to recover it. 00:29:31.327 [2024-07-26 11:37:26.746910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.327 [2024-07-26 11:37:26.746938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.327 qpair failed and we were unable to recover it. 00:29:31.327 [2024-07-26 11:37:26.747130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.327 [2024-07-26 11:37:26.747162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.327 qpair failed and we were unable to recover it. 
00:29:31.327 [2024-07-26 11:37:26.747377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.327 [2024-07-26 11:37:26.747410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.327 qpair failed and we were unable to recover it. 00:29:31.327 [2024-07-26 11:37:26.747626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.327 [2024-07-26 11:37:26.747655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.327 qpair failed and we were unable to recover it. 00:29:31.327 [2024-07-26 11:37:26.747834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.327 [2024-07-26 11:37:26.747861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.327 qpair failed and we were unable to recover it. 00:29:31.327 [2024-07-26 11:37:26.748070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.327 [2024-07-26 11:37:26.748103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.327 qpair failed and we were unable to recover it. 00:29:31.327 [2024-07-26 11:37:26.748317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.327 [2024-07-26 11:37:26.748344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.327 qpair failed and we were unable to recover it. 00:29:31.327 [2024-07-26 11:37:26.748550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.327 [2024-07-26 11:37:26.748578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.327 qpair failed and we were unable to recover it. 00:29:31.327 [2024-07-26 11:37:26.748783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.327 [2024-07-26 11:37:26.748811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.327 qpair failed and we were unable to recover it. 00:29:31.327 [2024-07-26 11:37:26.749051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.327 [2024-07-26 11:37:26.749083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.327 qpair failed and we were unable to recover it. 00:29:31.327 [2024-07-26 11:37:26.749282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.327 [2024-07-26 11:37:26.749310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.327 qpair failed and we were unable to recover it. 00:29:31.327 [2024-07-26 11:37:26.749495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.327 [2024-07-26 11:37:26.749524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.327 qpair failed and we were unable to recover it. 
00:29:31.327 [2024-07-26 11:37:26.749703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.327 [2024-07-26 11:37:26.749731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.327 qpair failed and we were unable to recover it. 00:29:31.327 [2024-07-26 11:37:26.749909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.327 [2024-07-26 11:37:26.749941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.327 qpair failed and we were unable to recover it. 00:29:31.327 [2024-07-26 11:37:26.750097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.327 [2024-07-26 11:37:26.750124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.327 qpair failed and we were unable to recover it. 00:29:31.327 [2024-07-26 11:37:26.750320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.327 [2024-07-26 11:37:26.750354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.327 qpair failed and we were unable to recover it. 00:29:31.327 [2024-07-26 11:37:26.750552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.327 [2024-07-26 11:37:26.750581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.327 qpair failed and we were unable to recover it. 00:29:31.327 [2024-07-26 11:37:26.750795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.327 [2024-07-26 11:37:26.750827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.327 qpair failed and we were unable to recover it. 00:29:31.327 [2024-07-26 11:37:26.751070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.327 [2024-07-26 11:37:26.751098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.327 qpair failed and we were unable to recover it. 00:29:31.327 [2024-07-26 11:37:26.751296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.327 [2024-07-26 11:37:26.751329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.327 qpair failed and we were unable to recover it. 00:29:31.327 [2024-07-26 11:37:26.751562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.327 [2024-07-26 11:37:26.751591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.327 qpair failed and we were unable to recover it. 00:29:31.328 [2024-07-26 11:37:26.751791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.328 [2024-07-26 11:37:26.751824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.328 qpair failed and we were unable to recover it. 
00:29:31.328 [2024-07-26 11:37:26.752038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.328 [2024-07-26 11:37:26.752066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.328 qpair failed and we were unable to recover it. 00:29:31.328 [2024-07-26 11:37:26.752252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.328 [2024-07-26 11:37:26.752284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.328 qpair failed and we were unable to recover it. 00:29:31.328 [2024-07-26 11:37:26.752447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.328 [2024-07-26 11:37:26.752474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.328 qpair failed and we were unable to recover it. 00:29:31.328 [2024-07-26 11:37:26.752697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.328 [2024-07-26 11:37:26.752729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.328 qpair failed and we were unable to recover it. 00:29:31.328 [2024-07-26 11:37:26.752897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.328 [2024-07-26 11:37:26.752924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.328 qpair failed and we were unable to recover it. 00:29:31.328 [2024-07-26 11:37:26.753134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.328 [2024-07-26 11:37:26.753167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.328 qpair failed and we were unable to recover it. 00:29:31.328 [2024-07-26 11:37:26.753381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.328 [2024-07-26 11:37:26.753409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.328 qpair failed and we were unable to recover it. 00:29:31.328 [2024-07-26 11:37:26.753583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.328 [2024-07-26 11:37:26.753612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.328 qpair failed and we were unable to recover it. 00:29:31.328 [2024-07-26 11:37:26.753788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.328 [2024-07-26 11:37:26.753816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.328 qpair failed and we were unable to recover it. 00:29:31.328 [2024-07-26 11:37:26.753995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.328 [2024-07-26 11:37:26.754027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.328 qpair failed and we were unable to recover it. 
00:29:31.328 [2024-07-26 11:37:26.754235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.328 [2024-07-26 11:37:26.754262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.328 qpair failed and we were unable to recover it. 00:29:31.328 [2024-07-26 11:37:26.754441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.328 [2024-07-26 11:37:26.754496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.328 qpair failed and we were unable to recover it. 00:29:31.328 [2024-07-26 11:37:26.754645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.328 [2024-07-26 11:37:26.754674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.328 qpair failed and we were unable to recover it. 00:29:31.328 [2024-07-26 11:37:26.754887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.328 [2024-07-26 11:37:26.754920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.328 qpair failed and we were unable to recover it. 00:29:31.328 [2024-07-26 11:37:26.755080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.328 [2024-07-26 11:37:26.755108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.328 qpair failed and we were unable to recover it. 00:29:31.328 [2024-07-26 11:37:26.755308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.328 [2024-07-26 11:37:26.755340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.328 qpair failed and we were unable to recover it. 00:29:31.328 [2024-07-26 11:37:26.755557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.328 [2024-07-26 11:37:26.755585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.328 qpair failed and we were unable to recover it. 00:29:31.328 [2024-07-26 11:37:26.755796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.328 [2024-07-26 11:37:26.755827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.328 qpair failed and we were unable to recover it. 00:29:31.328 [2024-07-26 11:37:26.755994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.328 [2024-07-26 11:37:26.756022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.328 qpair failed and we were unable to recover it. 00:29:31.328 [2024-07-26 11:37:26.756182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.328 [2024-07-26 11:37:26.756215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.328 qpair failed and we were unable to recover it. 
00:29:31.328 [2024-07-26 11:37:26.756422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:31.328 [2024-07-26 11:37:26.756457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 
00:29:31.328 qpair failed and we were unable to recover it. 
[... the same three-message sequence — posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. — repeated 209 more times between 2024-07-26 11:37:26.756 and 11:37:26.804 (log time 00:29:31.328 through 00:29:31.334) ...]
00:29:31.334 [2024-07-26 11:37:26.804936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-26 11:37:26.804964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-26 11:37:26.805128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-26 11:37:26.805162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-26 11:37:26.805338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-26 11:37:26.805365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-26 11:37:26.805528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-26 11:37:26.805555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-26 11:37:26.805766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-26 11:37:26.805794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-26 11:37:26.806025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-26 11:37:26.806057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-26 11:37:26.806273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-26 11:37:26.806301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-26 11:37:26.806511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-26 11:37:26.806540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-26 11:37:26.806753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-26 11:37:26.806781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-26 11:37:26.806977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-26 11:37:26.807009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 
00:29:31.334 [2024-07-26 11:37:26.807232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-26 11:37:26.807260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-26 11:37:26.807443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-26 11:37:26.807489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-26 11:37:26.807705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-26 11:37:26.807732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-26 11:37:26.807935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-26 11:37:26.807967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-26 11:37:26.808167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-26 11:37:26.808194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-26 11:37:26.808354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-26 11:37:26.808386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-26 11:37:26.808584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-26 11:37:26.808612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-26 11:37:26.808819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-26 11:37:26.808852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-26 11:37:26.809068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-26 11:37:26.809095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-26 11:37:26.809277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-26 11:37:26.809309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 
00:29:31.334 [2024-07-26 11:37:26.809535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.334 [2024-07-26 11:37:26.809562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.334 qpair failed and we were unable to recover it. 00:29:31.334 [2024-07-26 11:37:26.809773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-26 11:37:26.809817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-26 11:37:26.810030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-26 11:37:26.810058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-26 11:37:26.810259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-26 11:37:26.810291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-26 11:37:26.810515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-26 11:37:26.810543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-26 11:37:26.810742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-26 11:37:26.810775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-26 11:37:26.810989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-26 11:37:26.811017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-26 11:37:26.811225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-26 11:37:26.811258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-26 11:37:26.811472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-26 11:37:26.811499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-26 11:37:26.811638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-26 11:37:26.811681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 
00:29:31.335 [2024-07-26 11:37:26.811845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-26 11:37:26.811872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-26 11:37:26.812085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-26 11:37:26.812117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-26 11:37:26.812345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-26 11:37:26.812373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-26 11:37:26.812549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-26 11:37:26.812577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-26 11:37:26.812787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-26 11:37:26.812814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-26 11:37:26.813007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-26 11:37:26.813040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-26 11:37:26.813243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-26 11:37:26.813270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-26 11:37:26.813444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-26 11:37:26.813492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-26 11:37:26.813682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-26 11:37:26.813709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-26 11:37:26.813862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-26 11:37:26.813894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 
00:29:31.335 [2024-07-26 11:37:26.814119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-26 11:37:26.814147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-26 11:37:26.814323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-26 11:37:26.814356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-26 11:37:26.814529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-26 11:37:26.814558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-26 11:37:26.814728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-26 11:37:26.814760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-26 11:37:26.814980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-26 11:37:26.815007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-26 11:37:26.815189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-26 11:37:26.815222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-26 11:37:26.815443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-26 11:37:26.815490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-26 11:37:26.815713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-26 11:37:26.815740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-26 11:37:26.815952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-26 11:37:26.815981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-26 11:37:26.816171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-26 11:37:26.816204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 
00:29:31.335 [2024-07-26 11:37:26.816382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-26 11:37:26.816409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-26 11:37:26.816609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-26 11:37:26.816641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-26 11:37:26.816841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-26 11:37:26.816869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-26 11:37:26.817055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-26 11:37:26.817088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-26 11:37:26.817315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-26 11:37:26.817347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-26 11:37:26.817573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-26 11:37:26.817601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-26 11:37:26.817773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-26 11:37:26.817801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-26 11:37:26.817995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-26 11:37:26.818027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-26 11:37:26.818224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.335 [2024-07-26 11:37:26.818251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.335 qpair failed and we were unable to recover it. 00:29:31.335 [2024-07-26 11:37:26.818445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-26 11:37:26.818492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 
00:29:31.336 [2024-07-26 11:37:26.818667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-26 11:37:26.818694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-26 11:37:26.818903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-26 11:37:26.818940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-26 11:37:26.819138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-26 11:37:26.819165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-26 11:37:26.819356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-26 11:37:26.819389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-26 11:37:26.819616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-26 11:37:26.819644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-26 11:37:26.819892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-26 11:37:26.819925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-26 11:37:26.820149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-26 11:37:26.820177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-26 11:37:26.820327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-26 11:37:26.820360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-26 11:37:26.820523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-26 11:37:26.820550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-26 11:37:26.820761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-26 11:37:26.820793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 
00:29:31.336 [2024-07-26 11:37:26.820997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-26 11:37:26.821024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-26 11:37:26.821241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-26 11:37:26.821273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-26 11:37:26.821490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-26 11:37:26.821517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-26 11:37:26.821710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-26 11:37:26.821743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-26 11:37:26.821925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-26 11:37:26.821953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-26 11:37:26.822170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-26 11:37:26.822202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-26 11:37:26.822381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-26 11:37:26.822409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-26 11:37:26.822598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-26 11:37:26.822626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-26 11:37:26.822860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-26 11:37:26.822887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-26 11:37:26.823054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-26 11:37:26.823086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 
00:29:31.336 [2024-07-26 11:37:26.823309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-26 11:37:26.823341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-26 11:37:26.823553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-26 11:37:26.823581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-26 11:37:26.823754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-26 11:37:26.823781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-26 11:37:26.823997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-26 11:37:26.824028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-26 11:37:26.824180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-26 11:37:26.824208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-26 11:37:26.824393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-26 11:37:26.824426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-26 11:37:26.824667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-26 11:37:26.824695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-26 11:37:26.824922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-26 11:37:26.824954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-26 11:37:26.825182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-26 11:37:26.825211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-26 11:37:26.825418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-26 11:37:26.825458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 
00:29:31.336 [2024-07-26 11:37:26.825635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-26 11:37:26.825663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-26 11:37:26.825880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-26 11:37:26.825912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-26 11:37:26.826123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-26 11:37:26.826150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-26 11:37:26.826338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-26 11:37:26.826369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-26 11:37:26.826593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-26 11:37:26.826621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-26 11:37:26.826831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-26 11:37:26.826863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.336 [2024-07-26 11:37:26.827093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.336 [2024-07-26 11:37:26.827121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.336 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-26 11:37:26.827327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-26 11:37:26.827360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-26 11:37:26.827563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-26 11:37:26.827592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-26 11:37:26.827813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-26 11:37:26.827845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 
00:29:31.337 [2024-07-26 11:37:26.828065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-26 11:37:26.828093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-26 11:37:26.828303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-26 11:37:26.828341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-26 11:37:26.828572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-26 11:37:26.828600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-26 11:37:26.828806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-26 11:37:26.828838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-26 11:37:26.829073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-26 11:37:26.829101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-26 11:37:26.829288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-26 11:37:26.829321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-26 11:37:26.829536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-26 11:37:26.829564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-26 11:37:26.829751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-26 11:37:26.829783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-26 11:37:26.829986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-26 11:37:26.830013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-26 11:37:26.830203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-26 11:37:26.830235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 
00:29:31.337 [2024-07-26 11:37:26.830478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-26 11:37:26.830507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-26 11:37:26.830663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-26 11:37:26.830690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-26 11:37:26.830884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-26 11:37:26.830912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-26 11:37:26.831117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-26 11:37:26.831149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-26 11:37:26.831369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-26 11:37:26.831397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-26 11:37:26.831590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-26 11:37:26.831618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-26 11:37:26.831786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-26 11:37:26.831813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-26 11:37:26.832026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-26 11:37:26.832058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-26 11:37:26.832273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-26 11:37:26.832300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-26 11:37:26.832503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-26 11:37:26.832532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 
00:29:31.337 [2024-07-26 11:37:26.832745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-26 11:37:26.832773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-26 11:37:26.832985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-26 11:37:26.833018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-26 11:37:26.833210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-26 11:37:26.833237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-26 11:37:26.833458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-26 11:37:26.833503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-26 11:37:26.833675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-26 11:37:26.833702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-26 11:37:26.833873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-26 11:37:26.833904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-26 11:37:26.834123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-26 11:37:26.834151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-26 11:37:26.834314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-26 11:37:26.834346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-26 11:37:26.834532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-26 11:37:26.834564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 00:29:31.337 [2024-07-26 11:37:26.834740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.337 [2024-07-26 11:37:26.834773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.337 qpair failed and we were unable to recover it. 
00:29:31.337 [2024-07-26 11:37:26.834996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.337 [2024-07-26 11:37:26.835023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420
00:29:31.337 qpair failed and we were unable to recover it.
00:29:31.337-00:29:31.340 [11:37:26.835192 - 11:37:26.855711] the same three-line error repeats back-to-back roughly ninety more times: every connect() to 10.0.0.2:4420 fails with errno = 111, the qpair at 0x7f0cbc000b90 cannot be established, and recovery fails each time.
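Note on the repeated failure: errno 111 on Linux is ECONNREFUSED, i.e. nothing is accepting TCP connections at 10.0.0.2:4420 yet, so every connect() attempt from the NVMe/TCP initiator is refused outright and the qpair can never be established. A minimal Python check of that mapping (Linux-specific; the numeric constant differs on other platforms):

    import errno
    import os

    # On Linux, errno 111 is ECONNREFUSED: the peer actively refused the
    # TCP connection, which is exactly what posix_sock_create reports above.
    assert errno.ECONNREFUSED == 111
    print(os.strerror(111))  # -> "Connection refused"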
00:29:31.340 [11:37:26.855939 - 11:37:26.857084] the same three-line error (connect() failed, errno = 111; sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats six more times.
00:29:31.340 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:31.340 [11:37:26.857301] one more instance of the same error.
00:29:31.340 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:29:31.340 [11:37:26.857540] one more instance of the same error.
00:29:31.340 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:29:31.340 [11:37:26.857749] one more instance of the same error.
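The `(( i == 0 ))` / `return 0` pair traced above reads as the tail of a countdown wait loop in autotest_common.sh finishing successfully, and timing_exit closes out the start_nvmf_tgt phase. A rough Python analogue of such a poll-until-listening helper (the name wait_for_listener, the attempt count, and the delay are illustrative assumptions, not SPDK's actual implementation):

    import socket
    import time

    def wait_for_listener(addr, port, attempts=10, delay=0.5):
        """Poll a TCP endpoint until it accepts a connection or attempts run out."""
        for i in range(attempts, 0, -1):
            try:
                # Success here corresponds to the shell loop's `return 0`.
                with socket.create_connection((addr, port), timeout=1.0):
                    return True
            except OSError:
                # e.g. errno 111 while the target is still coming up.
                time.sleep(delay)
        return False  # counter reached 0 without a successful connect

    # Illustrative usage against the listen address seen in this log:
    # wait_for_listener("10.0.0.2", 4420)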
00:29:31.340 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:29:31.340 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:31.340 [11:37:26.857955 - 11:37:26.859682] the same three-line error repeats nine more times, interleaved with the two trace lines above.
00:29:31.340 [2024-07-26 11:37:26.859865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.340 [2024-07-26 11:37:26.859892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.340 qpair failed and we were unable to recover it. 00:29:31.340 [2024-07-26 11:37:26.860070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-26 11:37:26.860103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-26 11:37:26.860303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-26 11:37:26.860331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-26 11:37:26.860523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-26 11:37:26.860553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-26 11:37:26.860735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-26 11:37:26.860761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-26 11:37:26.860903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-26 11:37:26.860935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-26 11:37:26.861158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-26 11:37:26.861187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-26 11:37:26.861384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-26 11:37:26.861417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-26 11:37:26.861615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-26 11:37:26.861644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-26 11:37:26.861817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-26 11:37:26.861850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 
00:29:31.341 [2024-07-26 11:37:26.862051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-26 11:37:26.862078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-26 11:37:26.862226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-26 11:37:26.862258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-26 11:37:26.862438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-26 11:37:26.862475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-26 11:37:26.862684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-26 11:37:26.862717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-26 11:37:26.862923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-26 11:37:26.862952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-26 11:37:26.863162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-26 11:37:26.863195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-26 11:37:26.863362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-26 11:37:26.863389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-26 11:37:26.863557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-26 11:37:26.863585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-26 11:37:26.863759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-26 11:37:26.863786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-26 11:37:26.863961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-26 11:37:26.863998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 
00:29:31.341 [2024-07-26 11:37:26.864145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-26 11:37:26.864173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-26 11:37:26.864359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-26 11:37:26.864391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-26 11:37:26.864622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-26 11:37:26.864650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-26 11:37:26.864843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-26 11:37:26.864875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-26 11:37:26.865038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-26 11:37:26.865064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-26 11:37:26.865276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-26 11:37:26.865310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-26 11:37:26.865534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-26 11:37:26.865564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-26 11:37:26.865744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-26 11:37:26.865777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-26 11:37:26.865974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-26 11:37:26.866003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-26 11:37:26.866183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-26 11:37:26.866216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 
00:29:31.341 [2024-07-26 11:37:26.866451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-26 11:37:26.866500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-26 11:37:26.866694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-26 11:37:26.866722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-26 11:37:26.866914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-26 11:37:26.866942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-26 11:37:26.867130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-26 11:37:26.867162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-26 11:37:26.867362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-26 11:37:26.867390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-26 11:37:26.867578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-26 11:37:26.867608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-26 11:37:26.867828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-26 11:37:26.867856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-26 11:37:26.868065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-26 11:37:26.868098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.341 [2024-07-26 11:37:26.868279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.341 [2024-07-26 11:37:26.868307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.341 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-26 11:37:26.868514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-26 11:37:26.868544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 
00:29:31.342 [2024-07-26 11:37:26.868681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-26 11:37:26.868708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-26 11:37:26.868843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-26 11:37:26.868874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-26 11:37:26.869064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-26 11:37:26.869095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-26 11:37:26.869253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-26 11:37:26.869285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-26 11:37:26.869454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-26 11:37:26.869486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-26 11:37:26.869622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-26 11:37:26.869650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-26 11:37:26.869861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-26 11:37:26.869889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-26 11:37:26.870074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-26 11:37:26.870107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-26 11:37:26.870253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-26 11:37:26.870285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-26 11:37:26.870517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-26 11:37:26.870545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 
00:29:31.342 [2024-07-26 11:37:26.870710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-26 11:37:26.870738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-26 11:37:26.870950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-26 11:37:26.870983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-26 11:37:26.871207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-26 11:37:26.871235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-26 11:37:26.871417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-26 11:37:26.871456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-26 11:37:26.871631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-26 11:37:26.871659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-26 11:37:26.871857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-26 11:37:26.871890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-26 11:37:26.872113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-26 11:37:26.872140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-26 11:37:26.872334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-26 11:37:26.872375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-26 11:37:26.872549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-26 11:37:26.872578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 00:29:31.342 [2024-07-26 11:37:26.872761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.342 [2024-07-26 11:37:26.872793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.342 qpair failed and we were unable to recover it. 
00:29:31.342 [2024-07-26 11:37:26.873013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.342 [2024-07-26 11:37:26.873048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420
00:29:31.342 qpair failed and we were unable to recover it.
00:29:31.343 (last 3 messages repeated for each subsequent connect attempt)
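errno 111 is ECONNREFUSED: the initiator's repeated connect() to 10.0.0.2 port 4420 is being refused because nothing is accepting connections on that address yet, so every new qpair fails the same way and cannot be recovered. A minimal bash sketch of the same failure mode, assuming a host that is reachable but has no listener on the port (the address and port below simply mirror the log, they are not a real endpoint):

    # Bash's /dev/tcp redirection performs a TCP connect(); with nothing
    # listening on the port it fails with ECONNREFUSED (errno 111).
    # timeout covers the case where the host is unreachable instead.
    timeout 5 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' \
        || echo "connect() refused or timed out, as in the log above"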
00:29:31.343 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:31.343 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
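The trap traced above is the harness's cleanup hook: on SIGINT, SIGTERM, or normal exit it first makes a best-effort attempt to dump the app's shared-memory state (process_shm is a test helper; the `|| :` swallows its failure) and then tears the test environment down with nvmftestfini. The same idiom in isolation, with hypothetical stand-ins for the SPDK helpers:

    # Generic form of the cleanup trap: best-effort diagnostics, then
    # mandatory teardown, on interrupt or exit. collect_diagnostics and
    # teardown are placeholder names standing in for process_shm and
    # nvmftestfini.
    trap 'collect_diagnostics || :; teardown' SIGINT SIGTERM EXIT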
00:29:31.343 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:31.343 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
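rpc_cmd is the test wrapper around SPDK's scripts/rpc.py client; the call traced above creates a RAM-backed bdev named Malloc0 with a total size of 64 MB and a 512-byte block size (the lone `Malloc0` line further down is the RPC's reply). A standalone sketch of the equivalent direct invocation, assuming a running SPDK target on the default RPC socket:

    # Create a 64 MB malloc (RAM-backed) block device with 512-byte blocks.
    # Assumes the target listens on the default RPC socket /var/tmp/spdk.sock.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # On success the RPC prints the new bdev's name:
    #   Malloc0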
00:29:31.344 (connect() failed / qpair failed messages continue to repeat while the RPC commands below run)
00:29:31.347 Malloc0
00:29:31.347 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:31.347 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:29:31.347 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:31.347 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:31.347 [2024-07-26 11:37:26.914883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-26 11:37:26.914914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 00:29:31.347 [2024-07-26 11:37:26.915122] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:31.347 [2024-07-26 11:37:26.915141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-26 11:37:26.915167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 00:29:31.347 [2024-07-26 11:37:26.915343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-26 11:37:26.915374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 00:29:31.347 [2024-07-26 11:37:26.915580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-26 11:37:26.915609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.347 qpair failed and we were unable to recover it. 00:29:31.347 [2024-07-26 11:37:26.915822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.347 [2024-07-26 11:37:26.915854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.348 qpair failed and we were unable to recover it. 00:29:31.348 [2024-07-26 11:37:26.916061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.348 [2024-07-26 11:37:26.916089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.348 qpair failed and we were unable to recover it. 00:29:31.348 [2024-07-26 11:37:26.916277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.348 [2024-07-26 11:37:26.916310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.348 qpair failed and we were unable to recover it. 00:29:31.348 [2024-07-26 11:37:26.916535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.348 [2024-07-26 11:37:26.916564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.348 qpair failed and we were unable to recover it. 00:29:31.348 [2024-07-26 11:37:26.916743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.348 [2024-07-26 11:37:26.916775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420 00:29:31.348 qpair failed and we were unable to recover it. 
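The rpc_cmd nvmf_create_transport call traced above instantiates the TCP transport inside the running nvmf_tgt application; the *** TCP Transport Init *** notice is the target acknowledging it. The equivalent bring-up outside the harness looks like this (a sketch assuming a default SPDK build tree; rpc_cmd is effectively a wrapper around scripts/rpc.py):

$ ./build/bin/nvmf_tgt &
$ ./scripts/rpc.py nvmf_create_transport -t tcp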
00:29:31.348 [the connect() failed / sock connection error / qpair failed triple repeats for every retry from 11:37:26.916974 through 11:37:26.923295]
00:29:31.348 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:31.348 [2024-07-26 11:37:26.923484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.348 [2024-07-26 11:37:26.923513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0cbc000b90 with addr=10.0.0.2, port=4420
00:29:31.348 qpair failed and we were unable to recover it.
00:29:31.348 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:31.348 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:31.348 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:31.349 [the connect() failed / sock connection error / qpair failed triple repeats for every retry from 11:37:26.923719 through 11:37:26.925377, interleaved with the trace above]
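The rpc_cmd nvmf_create_subsystem call above defines the NVMe-oF subsystem the host will later connect to. Standalone, the same step is (a sketch; -a allows any host NQN to connect, -s sets the reported serial number):

$ ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001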
00:29:31.349 [the connect() failed / sock connection error / qpair failed triple repeats for every retry from 11:37:26.925593 through 11:37:26.930167]
00:29:31.349 [the retry triple continues from 11:37:26.930380 through 11:37:26.931343]
00:29:31.349 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:31.349 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:31.349 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:31.349 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:31.350 [the retry triple continues from 11:37:26.931534 through 11:37:26.934225, interleaved with the trace above]
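nvmf_subsystem_add_ns attaches an already-created bdev (here the Malloc0 ramdisk whose name leaks into the output earlier) to the subsystem as a namespace. Standalone, the bdev has to exist first; the size and block-size values below are illustrative, not taken from this run:

$ ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
$ ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0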
00:29:31.350 [the connect() failed / sock connection error / qpair failed triple repeats for every retry from 11:37:26.934444 through 11:37:26.938951]
00:29:31.350 [the retry triple continues from 11:37:26.939166 through 11:37:26.939398]
00:29:31.350 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:31.350 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:31.350 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:31.350 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:31.350 [the retry triple continues from 11:37:26.939639 through 11:37:26.940756, interleaved with the trace above]
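nvmf_subsystem_add_listener is the step every refused connect() above has been waiting for: it opens the TCP listening socket on 10.0.0.2:4420 for the subsystem. Standalone (a sketch using the same addressing as this run):

$ ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420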
00:29:31.350 [the connect() failed / sock connection error / qpair failed triple repeats for every retry from 11:37:26.940991 through 11:37:26.943230]
00:29:31.351 [2024-07-26 11:37:26.943467] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:31.351 [2024-07-26 11:37:26.946012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.351 [2024-07-26 11:37:26.946198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.351 [2024-07-26 11:37:26.946233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.351 [2024-07-26 11:37:26.946252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.351 [2024-07-26 11:37:26.946269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:31.351 [2024-07-26 11:37:26.946317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:31.351 qpair failed and we were unable to recover it.
00:29:31.351 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:31.351 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:31.351 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:31.351 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:31.351 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:31.351 11:37:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2235654
00:29:31.351 [2024-07-26 11:37:26.955853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.351 [2024-07-26 11:37:26.956029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.351 [2024-07-26 11:37:26.956074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.351 [2024-07-26 11:37:26.956096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.351 [2024-07-26 11:37:26.956112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:31.351 [2024-07-26 11:37:26.956151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:31.351 qpair failed and we were unable to recover it.
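From here the failure mode changes: the listener now accepts TCP connections, but the fabrics CONNECT for the I/O queue pair is rejected because the target no longer recognizes controller ID 0x1, most likely because the controller the host created earlier went away when the test disconnected the target. sct 1 marks a command-specific status type, and sc 130 is 0x82, which in the NVMe over Fabrics CONNECT status encoding corresponds to "Connect Invalid Parameters"; that matches the target-side "Unknown controller ID 0x1" complaint. A one-liner for the hex conversion:

$ printf 'sc 130 = 0x%x\n' 130
sc 130 = 0x82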
00:29:31.611 [2024-07-26 11:37:26.965867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.611 [2024-07-26 11:37:26.966029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.611 [2024-07-26 11:37:26.966065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.611 [2024-07-26 11:37:26.966083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.611 [2024-07-26 11:37:26.966100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:31.611 [2024-07-26 11:37:26.966139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:31.611 qpair failed and we were unable to recover it.
00:29:31.611 [the identical Unknown-controller-ID / CONNECT-failed (sct 1, sc 130) block repeats for every subsequent attempt, roughly every 10 ms, from 11:37:26.975819 through 11:37:27.136561]
00:29:31.613 [2024-07-26 11:37:27.146370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.613 [2024-07-26 11:37:27.146522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.613 [2024-07-26 11:37:27.146551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.613 [2024-07-26 11:37:27.146567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.613 [2024-07-26 11:37:27.146582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:31.613 [2024-07-26 11:37:27.146615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.613 qpair failed and we were unable to recover it. 00:29:31.613 [2024-07-26 11:37:27.156346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.613 [2024-07-26 11:37:27.156542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.613 [2024-07-26 11:37:27.156571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.613 [2024-07-26 11:37:27.156588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.613 [2024-07-26 11:37:27.156602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:31.613 [2024-07-26 11:37:27.156636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.613 qpair failed and we were unable to recover it. 00:29:31.613 [2024-07-26 11:37:27.166389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.613 [2024-07-26 11:37:27.166554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.613 [2024-07-26 11:37:27.166584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.613 [2024-07-26 11:37:27.166600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.613 [2024-07-26 11:37:27.166614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:31.613 [2024-07-26 11:37:27.166647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.613 qpair failed and we were unable to recover it. 
00:29:31.613 [2024-07-26 11:37:27.176369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.613 [2024-07-26 11:37:27.176564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.613 [2024-07-26 11:37:27.176594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.613 [2024-07-26 11:37:27.176616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.613 [2024-07-26 11:37:27.176632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:31.613 [2024-07-26 11:37:27.176667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.613 qpair failed and we were unable to recover it. 00:29:31.613 [2024-07-26 11:37:27.186435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.613 [2024-07-26 11:37:27.186592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.614 [2024-07-26 11:37:27.186621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.614 [2024-07-26 11:37:27.186638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.614 [2024-07-26 11:37:27.186652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:31.614 [2024-07-26 11:37:27.186687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.614 qpair failed and we were unable to recover it. 00:29:31.614 [2024-07-26 11:37:27.196506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.614 [2024-07-26 11:37:27.196689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.614 [2024-07-26 11:37:27.196723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.614 [2024-07-26 11:37:27.196742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.614 [2024-07-26 11:37:27.196759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:31.614 [2024-07-26 11:37:27.196797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.614 qpair failed and we were unable to recover it. 
00:29:31.614 [2024-07-26 11:37:27.206510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.614 [2024-07-26 11:37:27.206658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.614 [2024-07-26 11:37:27.206688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.614 [2024-07-26 11:37:27.206704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.614 [2024-07-26 11:37:27.206718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:31.614 [2024-07-26 11:37:27.206768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.614 qpair failed and we were unable to recover it. 00:29:31.614 [2024-07-26 11:37:27.216505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.614 [2024-07-26 11:37:27.216648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.614 [2024-07-26 11:37:27.216677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.614 [2024-07-26 11:37:27.216694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.614 [2024-07-26 11:37:27.216708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:31.614 [2024-07-26 11:37:27.216760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.614 qpair failed and we were unable to recover it. 00:29:31.614 [2024-07-26 11:37:27.226580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.614 [2024-07-26 11:37:27.226734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.614 [2024-07-26 11:37:27.226768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.614 [2024-07-26 11:37:27.226787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.614 [2024-07-26 11:37:27.226804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:31.614 [2024-07-26 11:37:27.226844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.614 qpair failed and we were unable to recover it. 
00:29:31.614 [2024-07-26 11:37:27.236583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.614 [2024-07-26 11:37:27.236732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.614 [2024-07-26 11:37:27.236767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.614 [2024-07-26 11:37:27.236785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.614 [2024-07-26 11:37:27.236802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:31.614 [2024-07-26 11:37:27.236841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.614 qpair failed and we were unable to recover it. 00:29:31.614 [2024-07-26 11:37:27.246681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.614 [2024-07-26 11:37:27.246834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.614 [2024-07-26 11:37:27.246868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.614 [2024-07-26 11:37:27.246886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.614 [2024-07-26 11:37:27.246903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:31.614 [2024-07-26 11:37:27.246942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.614 qpair failed and we were unable to recover it. 00:29:31.614 [2024-07-26 11:37:27.256636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.614 [2024-07-26 11:37:27.256805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.614 [2024-07-26 11:37:27.256839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.614 [2024-07-26 11:37:27.256858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.614 [2024-07-26 11:37:27.256874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:31.614 [2024-07-26 11:37:27.256913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.614 qpair failed and we were unable to recover it. 
00:29:31.614 [2024-07-26 11:37:27.266649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.614 [2024-07-26 11:37:27.266816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.614 [2024-07-26 11:37:27.266850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.614 [2024-07-26 11:37:27.266876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.614 [2024-07-26 11:37:27.266894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:31.614 [2024-07-26 11:37:27.266933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.614 qpair failed and we were unable to recover it. 00:29:31.874 [2024-07-26 11:37:27.276682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.874 [2024-07-26 11:37:27.276860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.874 [2024-07-26 11:37:27.276894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.874 [2024-07-26 11:37:27.276914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.874 [2024-07-26 11:37:27.276929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:31.874 [2024-07-26 11:37:27.276968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.874 qpair failed and we were unable to recover it. 00:29:31.874 [2024-07-26 11:37:27.286715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.874 [2024-07-26 11:37:27.286881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.874 [2024-07-26 11:37:27.286916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.874 [2024-07-26 11:37:27.286935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.874 [2024-07-26 11:37:27.286951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:31.874 [2024-07-26 11:37:27.286989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.874 qpair failed and we were unable to recover it. 
00:29:31.874 [2024-07-26 11:37:27.296775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.874 [2024-07-26 11:37:27.296932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.874 [2024-07-26 11:37:27.296967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.874 [2024-07-26 11:37:27.296985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.874 [2024-07-26 11:37:27.297002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:31.874 [2024-07-26 11:37:27.297039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.874 qpair failed and we were unable to recover it. 00:29:31.874 [2024-07-26 11:37:27.306785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.874 [2024-07-26 11:37:27.306939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.874 [2024-07-26 11:37:27.306973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.874 [2024-07-26 11:37:27.306993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.874 [2024-07-26 11:37:27.307011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:31.874 [2024-07-26 11:37:27.307051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.874 qpair failed and we were unable to recover it. 00:29:31.875 [2024-07-26 11:37:27.316847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.875 [2024-07-26 11:37:27.316999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.875 [2024-07-26 11:37:27.317033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.875 [2024-07-26 11:37:27.317052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.875 [2024-07-26 11:37:27.317069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:31.875 [2024-07-26 11:37:27.317108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.875 qpair failed and we were unable to recover it. 
00:29:31.875 [2024-07-26 11:37:27.326860] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.875 [2024-07-26 11:37:27.327015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.875 [2024-07-26 11:37:27.327049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.875 [2024-07-26 11:37:27.327068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.875 [2024-07-26 11:37:27.327085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:31.875 [2024-07-26 11:37:27.327125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.875 qpair failed and we were unable to recover it. 00:29:31.875 [2024-07-26 11:37:27.336882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.875 [2024-07-26 11:37:27.337043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.875 [2024-07-26 11:37:27.337078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.875 [2024-07-26 11:37:27.337097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.875 [2024-07-26 11:37:27.337114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:31.875 [2024-07-26 11:37:27.337154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.875 qpair failed and we were unable to recover it. 00:29:31.875 [2024-07-26 11:37:27.346871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.875 [2024-07-26 11:37:27.347014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.875 [2024-07-26 11:37:27.347056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.875 [2024-07-26 11:37:27.347074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.875 [2024-07-26 11:37:27.347091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:31.875 [2024-07-26 11:37:27.347129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.875 qpair failed and we were unable to recover it. 
00:29:31.875 [2024-07-26 11:37:27.356927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.875 [2024-07-26 11:37:27.357079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.875 [2024-07-26 11:37:27.357119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.875 [2024-07-26 11:37:27.357138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.875 [2024-07-26 11:37:27.357155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:31.875 [2024-07-26 11:37:27.357193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.875 qpair failed and we were unable to recover it. 00:29:31.875 [2024-07-26 11:37:27.366953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.875 [2024-07-26 11:37:27.367105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.875 [2024-07-26 11:37:27.367139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.875 [2024-07-26 11:37:27.367158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.875 [2024-07-26 11:37:27.367175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:31.875 [2024-07-26 11:37:27.367213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.875 qpair failed and we were unable to recover it. 00:29:31.875 [2024-07-26 11:37:27.377028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.875 [2024-07-26 11:37:27.377193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.875 [2024-07-26 11:37:27.377227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.875 [2024-07-26 11:37:27.377246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.875 [2024-07-26 11:37:27.377263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:31.875 [2024-07-26 11:37:27.377301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.875 qpair failed and we were unable to recover it. 
00:29:31.875 [2024-07-26 11:37:27.387026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.875 [2024-07-26 11:37:27.387204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.875 [2024-07-26 11:37:27.387238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.875 [2024-07-26 11:37:27.387258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.875 [2024-07-26 11:37:27.387275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:31.875 [2024-07-26 11:37:27.387315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.875 qpair failed and we were unable to recover it. 00:29:31.875 [2024-07-26 11:37:27.397084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.875 [2024-07-26 11:37:27.397239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.875 [2024-07-26 11:37:27.397273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.875 [2024-07-26 11:37:27.397292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.875 [2024-07-26 11:37:27.397309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:31.875 [2024-07-26 11:37:27.397355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.875 qpair failed and we were unable to recover it. 00:29:31.875 [2024-07-26 11:37:27.407108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.875 [2024-07-26 11:37:27.407265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.875 [2024-07-26 11:37:27.407299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.875 [2024-07-26 11:37:27.407318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.875 [2024-07-26 11:37:27.407335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:31.875 [2024-07-26 11:37:27.407372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.875 qpair failed and we were unable to recover it. 
00:29:31.875 [2024-07-26 11:37:27.417195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.875 [2024-07-26 11:37:27.417422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.875 [2024-07-26 11:37:27.417479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.875 [2024-07-26 11:37:27.417496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.875 [2024-07-26 11:37:27.417511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:31.875 [2024-07-26 11:37:27.417545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.875 qpair failed and we were unable to recover it. 00:29:31.875 [2024-07-26 11:37:27.427177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.875 [2024-07-26 11:37:27.427334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.875 [2024-07-26 11:37:27.427368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.875 [2024-07-26 11:37:27.427387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.875 [2024-07-26 11:37:27.427403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:31.875 [2024-07-26 11:37:27.427452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.875 qpair failed and we were unable to recover it. 00:29:31.875 [2024-07-26 11:37:27.437220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.875 [2024-07-26 11:37:27.437371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.875 [2024-07-26 11:37:27.437404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.875 [2024-07-26 11:37:27.437423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.875 [2024-07-26 11:37:27.437451] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:31.875 [2024-07-26 11:37:27.437504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.875 qpair failed and we were unable to recover it. 
00:29:31.875 [2024-07-26 11:37:27.447273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.876 [2024-07-26 11:37:27.447469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.876 [2024-07-26 11:37:27.447520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.876 [2024-07-26 11:37:27.447537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.876 [2024-07-26 11:37:27.447551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:31.876 [2024-07-26 11:37:27.447586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.876 qpair failed and we were unable to recover it. 00:29:31.876 [2024-07-26 11:37:27.457322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.876 [2024-07-26 11:37:27.457507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.876 [2024-07-26 11:37:27.457534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.876 [2024-07-26 11:37:27.457550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.876 [2024-07-26 11:37:27.457563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:31.876 [2024-07-26 11:37:27.457595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.876 qpair failed and we were unable to recover it. 00:29:31.876 [2024-07-26 11:37:27.467287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.876 [2024-07-26 11:37:27.467452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.876 [2024-07-26 11:37:27.467502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.876 [2024-07-26 11:37:27.467518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.876 [2024-07-26 11:37:27.467532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:31.876 [2024-07-26 11:37:27.467565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.876 qpair failed and we were unable to recover it. 
00:29:31.876 [2024-07-26 11:37:27.477299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.876 [2024-07-26 11:37:27.477475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.876 [2024-07-26 11:37:27.477519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.876 [2024-07-26 11:37:27.477534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.876 [2024-07-26 11:37:27.477549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:31.876 [2024-07-26 11:37:27.477582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.876 qpair failed and we were unable to recover it. 00:29:31.876 [2024-07-26 11:37:27.487360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.876 [2024-07-26 11:37:27.487526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.876 [2024-07-26 11:37:27.487555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.876 [2024-07-26 11:37:27.487570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.876 [2024-07-26 11:37:27.487591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:31.876 [2024-07-26 11:37:27.487626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.876 qpair failed and we were unable to recover it. 00:29:31.876 [2024-07-26 11:37:27.497408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.876 [2024-07-26 11:37:27.497583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.876 [2024-07-26 11:37:27.497611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.876 [2024-07-26 11:37:27.497627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.876 [2024-07-26 11:37:27.497641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:31.876 [2024-07-26 11:37:27.497693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.876 qpair failed and we were unable to recover it. 
00:29:31.876 [2024-07-26 11:37:27.507510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.876 [2024-07-26 11:37:27.507652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.876 [2024-07-26 11:37:27.507684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.876 [2024-07-26 11:37:27.507719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.876 [2024-07-26 11:37:27.507735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:31.876 [2024-07-26 11:37:27.507775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.876 qpair failed and we were unable to recover it. 00:29:31.876 [2024-07-26 11:37:27.517416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.876 [2024-07-26 11:37:27.517581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.876 [2024-07-26 11:37:27.517609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.876 [2024-07-26 11:37:27.517625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.876 [2024-07-26 11:37:27.517639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:31.876 [2024-07-26 11:37:27.517672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.876 qpair failed and we were unable to recover it. 00:29:31.876 [2024-07-26 11:37:27.527455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.876 [2024-07-26 11:37:27.527607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.876 [2024-07-26 11:37:27.527635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.876 [2024-07-26 11:37:27.527651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.876 [2024-07-26 11:37:27.527665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:31.876 [2024-07-26 11:37:27.527714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:31.876 qpair failed and we were unable to recover it. 
00:29:32.136 [2024-07-26 11:37:27.537522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.136 [2024-07-26 11:37:27.537681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.136 [2024-07-26 11:37:27.537715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.136 [2024-07-26 11:37:27.537733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.136 [2024-07-26 11:37:27.537750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.136 [2024-07-26 11:37:27.537789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.136 qpair failed and we were unable to recover it. 00:29:32.136 [2024-07-26 11:37:27.547515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.136 [2024-07-26 11:37:27.547682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.136 [2024-07-26 11:37:27.547715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.136 [2024-07-26 11:37:27.547734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.136 [2024-07-26 11:37:27.547751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.136 [2024-07-26 11:37:27.547790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.136 qpair failed and we were unable to recover it. 00:29:32.136 [2024-07-26 11:37:27.557557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.136 [2024-07-26 11:37:27.557706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.136 [2024-07-26 11:37:27.557739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.136 [2024-07-26 11:37:27.557758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.136 [2024-07-26 11:37:27.557774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.136 [2024-07-26 11:37:27.557814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.136 qpair failed and we were unable to recover it. 
00:29:32.136 [2024-07-26 11:37:27.567636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.136 [2024-07-26 11:37:27.567796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.136 [2024-07-26 11:37:27.567829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.136 [2024-07-26 11:37:27.567847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.136 [2024-07-26 11:37:27.567865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.136 [2024-07-26 11:37:27.567905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.136 qpair failed and we were unable to recover it. 00:29:32.136 [2024-07-26 11:37:27.577662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.136 [2024-07-26 11:37:27.577842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.136 [2024-07-26 11:37:27.577875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.136 [2024-07-26 11:37:27.577893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.136 [2024-07-26 11:37:27.577917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.136 [2024-07-26 11:37:27.577956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.136 qpair failed and we were unable to recover it. 00:29:32.136 [2024-07-26 11:37:27.587627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.136 [2024-07-26 11:37:27.587789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.136 [2024-07-26 11:37:27.587822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.136 [2024-07-26 11:37:27.587841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.136 [2024-07-26 11:37:27.587857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.136 [2024-07-26 11:37:27.587897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.136 qpair failed and we were unable to recover it. 
00:29:32.136 [2024-07-26 11:37:27.597794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.136 [2024-07-26 11:37:27.597969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.136 [2024-07-26 11:37:27.598002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.136 [2024-07-26 11:37:27.598021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.136 [2024-07-26 11:37:27.598037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.136 [2024-07-26 11:37:27.598075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.136 qpair failed and we were unable to recover it. 00:29:32.136 [2024-07-26 11:37:27.607701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.136 [2024-07-26 11:37:27.607862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.136 [2024-07-26 11:37:27.607895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.136 [2024-07-26 11:37:27.607914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.136 [2024-07-26 11:37:27.607931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.136 [2024-07-26 11:37:27.607969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.136 qpair failed and we were unable to recover it. 00:29:32.136 [2024-07-26 11:37:27.617782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.136 [2024-07-26 11:37:27.617965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.136 [2024-07-26 11:37:27.617998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.136 [2024-07-26 11:37:27.618016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.136 [2024-07-26 11:37:27.618032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.136 [2024-07-26 11:37:27.618072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.136 qpair failed and we were unable to recover it. 
00:29:32.136 [2024-07-26 11:37:27.627759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.136 [2024-07-26 11:37:27.627919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.136 [2024-07-26 11:37:27.627952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.136 [2024-07-26 11:37:27.627971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.136 [2024-07-26 11:37:27.627987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.136 [2024-07-26 11:37:27.628025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.136 qpair failed and we were unable to recover it. 00:29:32.136 [2024-07-26 11:37:27.637803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.136 [2024-07-26 11:37:27.637961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.136 [2024-07-26 11:37:27.637993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.136 [2024-07-26 11:37:27.638012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.136 [2024-07-26 11:37:27.638029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.136 [2024-07-26 11:37:27.638069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.136 qpair failed and we were unable to recover it. 00:29:32.136 [2024-07-26 11:37:27.647853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.136 [2024-07-26 11:37:27.648044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.136 [2024-07-26 11:37:27.648077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.136 [2024-07-26 11:37:27.648096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.136 [2024-07-26 11:37:27.648113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.137 [2024-07-26 11:37:27.648151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.137 qpair failed and we were unable to recover it. 
00:29:32.137 [2024-07-26 11:37:27.657888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.137 [2024-07-26 11:37:27.658046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.137 [2024-07-26 11:37:27.658079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.137 [2024-07-26 11:37:27.658097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.137 [2024-07-26 11:37:27.658113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.137 [2024-07-26 11:37:27.658151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.137 qpair failed and we were unable to recover it.
00:29:32.137 [2024-07-26 11:37:27.667921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.137 [2024-07-26 11:37:27.668094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.137 [2024-07-26 11:37:27.668127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.137 [2024-07-26 11:37:27.668153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.137 [2024-07-26 11:37:27.668171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.137 [2024-07-26 11:37:27.668210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.137 qpair failed and we were unable to recover it.
00:29:32.137 [2024-07-26 11:37:27.677906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.137 [2024-07-26 11:37:27.678060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.137 [2024-07-26 11:37:27.678093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.137 [2024-07-26 11:37:27.678111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.137 [2024-07-26 11:37:27.678128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.137 [2024-07-26 11:37:27.678167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.137 qpair failed and we were unable to recover it.
00:29:32.137 [2024-07-26 11:37:27.687925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.137 [2024-07-26 11:37:27.688115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.137 [2024-07-26 11:37:27.688149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.137 [2024-07-26 11:37:27.688167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.137 [2024-07-26 11:37:27.688184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.137 [2024-07-26 11:37:27.688224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.137 qpair failed and we were unable to recover it.
00:29:32.137 [2024-07-26 11:37:27.697968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.137 [2024-07-26 11:37:27.698131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.137 [2024-07-26 11:37:27.698164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.137 [2024-07-26 11:37:27.698183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.137 [2024-07-26 11:37:27.698200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.137 [2024-07-26 11:37:27.698240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.137 qpair failed and we were unable to recover it.
00:29:32.137 [2024-07-26 11:37:27.707974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.137 [2024-07-26 11:37:27.708129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.137 [2024-07-26 11:37:27.708163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.137 [2024-07-26 11:37:27.708181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.137 [2024-07-26 11:37:27.708198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.137 [2024-07-26 11:37:27.708236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.137 qpair failed and we were unable to recover it.
00:29:32.137 [2024-07-26 11:37:27.718233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.137 [2024-07-26 11:37:27.718399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.137 [2024-07-26 11:37:27.718439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.137 [2024-07-26 11:37:27.718475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.137 [2024-07-26 11:37:27.718490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.137 [2024-07-26 11:37:27.718535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.137 qpair failed and we were unable to recover it.
00:29:32.137 [2024-07-26 11:37:27.728101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.137 [2024-07-26 11:37:27.728266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.137 [2024-07-26 11:37:27.728299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.137 [2024-07-26 11:37:27.728318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.137 [2024-07-26 11:37:27.728335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.137 [2024-07-26 11:37:27.728372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.137 qpair failed and we were unable to recover it.
00:29:32.137 [2024-07-26 11:37:27.738151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.137 [2024-07-26 11:37:27.738313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.137 [2024-07-26 11:37:27.738346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.137 [2024-07-26 11:37:27.738365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.137 [2024-07-26 11:37:27.738382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.137 [2024-07-26 11:37:27.738420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.137 qpair failed and we were unable to recover it.
00:29:32.137 [2024-07-26 11:37:27.748177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.137 [2024-07-26 11:37:27.748331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.137 [2024-07-26 11:37:27.748365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.137 [2024-07-26 11:37:27.748383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.137 [2024-07-26 11:37:27.748400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.137 [2024-07-26 11:37:27.748452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.137 qpair failed and we were unable to recover it.
00:29:32.137 [2024-07-26 11:37:27.758156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.137 [2024-07-26 11:37:27.758318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.137 [2024-07-26 11:37:27.758361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.137 [2024-07-26 11:37:27.758381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.137 [2024-07-26 11:37:27.758398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.137 [2024-07-26 11:37:27.758445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.137 qpair failed and we were unable to recover it.
00:29:32.137 [2024-07-26 11:37:27.768198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.137 [2024-07-26 11:37:27.768383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.137 [2024-07-26 11:37:27.768417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.137 [2024-07-26 11:37:27.768448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.137 [2024-07-26 11:37:27.768466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.137 [2024-07-26 11:37:27.768517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.137 qpair failed and we were unable to recover it.
00:29:32.137 [2024-07-26 11:37:27.778218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.137 [2024-07-26 11:37:27.778386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.137 [2024-07-26 11:37:27.778419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.137 [2024-07-26 11:37:27.778451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.137 [2024-07-26 11:37:27.778484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.137 [2024-07-26 11:37:27.778519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.137 qpair failed and we were unable to recover it.
00:29:32.138 [2024-07-26 11:37:27.788215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.138 [2024-07-26 11:37:27.788372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.138 [2024-07-26 11:37:27.788406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.138 [2024-07-26 11:37:27.788425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.138 [2024-07-26 11:37:27.788452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.138 [2024-07-26 11:37:27.788508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.138 qpair failed and we were unable to recover it.
00:29:32.397 [2024-07-26 11:37:27.798240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.397 [2024-07-26 11:37:27.798403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.397 [2024-07-26 11:37:27.798444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.397 [2024-07-26 11:37:27.798480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.397 [2024-07-26 11:37:27.798495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.397 [2024-07-26 11:37:27.798535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.397 qpair failed and we were unable to recover it.
00:29:32.397 [2024-07-26 11:37:27.808346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.397 [2024-07-26 11:37:27.808528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.397 [2024-07-26 11:37:27.808557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.397 [2024-07-26 11:37:27.808572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.397 [2024-07-26 11:37:27.808587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.397 [2024-07-26 11:37:27.808619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.397 qpair failed and we were unable to recover it.
00:29:32.397 [2024-07-26 11:37:27.818327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.397 [2024-07-26 11:37:27.818529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.397 [2024-07-26 11:37:27.818558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.397 [2024-07-26 11:37:27.818574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.397 [2024-07-26 11:37:27.818588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.397 [2024-07-26 11:37:27.818622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.397 qpair failed and we were unable to recover it.
00:29:32.397 [2024-07-26 11:37:27.828308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.397 [2024-07-26 11:37:27.828486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.397 [2024-07-26 11:37:27.828515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.397 [2024-07-26 11:37:27.828531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.397 [2024-07-26 11:37:27.828546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.397 [2024-07-26 11:37:27.828578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.397 qpair failed and we were unable to recover it.
00:29:32.397 [2024-07-26 11:37:27.838381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.397 [2024-07-26 11:37:27.838542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.397 [2024-07-26 11:37:27.838571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.397 [2024-07-26 11:37:27.838587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.397 [2024-07-26 11:37:27.838602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.397 [2024-07-26 11:37:27.838635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.397 qpair failed and we were unable to recover it.
00:29:32.397 [2024-07-26 11:37:27.848374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.397 [2024-07-26 11:37:27.848531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.397 [2024-07-26 11:37:27.848566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.397 [2024-07-26 11:37:27.848582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.398 [2024-07-26 11:37:27.848597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.398 [2024-07-26 11:37:27.848630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.398 qpair failed and we were unable to recover it.
00:29:32.398 [2024-07-26 11:37:27.858516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.398 [2024-07-26 11:37:27.858652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.398 [2024-07-26 11:37:27.858681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.398 [2024-07-26 11:37:27.858696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.398 [2024-07-26 11:37:27.858727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.398 [2024-07-26 11:37:27.858767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.398 qpair failed and we were unable to recover it.
00:29:32.398 [2024-07-26 11:37:27.868489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.398 [2024-07-26 11:37:27.868633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.398 [2024-07-26 11:37:27.868665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.398 [2024-07-26 11:37:27.868699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.398 [2024-07-26 11:37:27.868715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.398 [2024-07-26 11:37:27.868757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.398 qpair failed and we were unable to recover it.
00:29:32.398 [2024-07-26 11:37:27.878505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.398 [2024-07-26 11:37:27.878652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.398 [2024-07-26 11:37:27.878697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.398 [2024-07-26 11:37:27.878715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.398 [2024-07-26 11:37:27.878732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.398 [2024-07-26 11:37:27.878771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.398 qpair failed and we were unable to recover it.
00:29:32.398 [2024-07-26 11:37:27.888501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.398 [2024-07-26 11:37:27.888636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.398 [2024-07-26 11:37:27.888664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.398 [2024-07-26 11:37:27.888680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.398 [2024-07-26 11:37:27.888714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.398 [2024-07-26 11:37:27.888756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.398 qpair failed and we were unable to recover it.
00:29:32.398 [2024-07-26 11:37:27.898539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.398 [2024-07-26 11:37:27.898706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.398 [2024-07-26 11:37:27.898740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.398 [2024-07-26 11:37:27.898759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.398 [2024-07-26 11:37:27.898775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.398 [2024-07-26 11:37:27.898814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.398 qpair failed and we were unable to recover it.
00:29:32.398 [2024-07-26 11:37:27.908648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.398 [2024-07-26 11:37:27.908824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.398 [2024-07-26 11:37:27.908858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.398 [2024-07-26 11:37:27.908876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.398 [2024-07-26 11:37:27.908893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.398 [2024-07-26 11:37:27.908933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.398 qpair failed and we were unable to recover it.
00:29:32.398 [2024-07-26 11:37:27.918650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.398 [2024-07-26 11:37:27.918829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.398 [2024-07-26 11:37:27.918862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.398 [2024-07-26 11:37:27.918880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.398 [2024-07-26 11:37:27.918896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.398 [2024-07-26 11:37:27.918936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.398 qpair failed and we were unable to recover it.
00:29:32.398 [2024-07-26 11:37:27.928630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.398 [2024-07-26 11:37:27.928783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.398 [2024-07-26 11:37:27.928816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.398 [2024-07-26 11:37:27.928835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.398 [2024-07-26 11:37:27.928852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.398 [2024-07-26 11:37:27.928890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.398 qpair failed and we were unable to recover it.
00:29:32.398 [2024-07-26 11:37:27.938783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.398 [2024-07-26 11:37:27.938950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.398 [2024-07-26 11:37:27.938983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.398 [2024-07-26 11:37:27.939001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.398 [2024-07-26 11:37:27.939017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.398 [2024-07-26 11:37:27.939059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.398 qpair failed and we were unable to recover it.
00:29:32.398 [2024-07-26 11:37:27.948656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.398 [2024-07-26 11:37:27.948829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.398 [2024-07-26 11:37:27.948863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.398 [2024-07-26 11:37:27.948881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.398 [2024-07-26 11:37:27.948898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.398 [2024-07-26 11:37:27.948937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.398 qpair failed and we were unable to recover it.
00:29:32.398 [2024-07-26 11:37:27.958750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.398 [2024-07-26 11:37:27.958900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.398 [2024-07-26 11:37:27.958933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.398 [2024-07-26 11:37:27.958952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.398 [2024-07-26 11:37:27.958969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.398 [2024-07-26 11:37:27.959007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.398 qpair failed and we were unable to recover it.
00:29:32.398 [2024-07-26 11:37:27.968750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.398 [2024-07-26 11:37:27.968913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.398 [2024-07-26 11:37:27.968946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.398 [2024-07-26 11:37:27.968964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.398 [2024-07-26 11:37:27.968981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.398 [2024-07-26 11:37:27.969019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.398 qpair failed and we were unable to recover it.
00:29:32.398 [2024-07-26 11:37:27.978815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.398 [2024-07-26 11:37:27.978980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.398 [2024-07-26 11:37:27.979013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.398 [2024-07-26 11:37:27.979032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.399 [2024-07-26 11:37:27.979055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.399 [2024-07-26 11:37:27.979097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.399 qpair failed and we were unable to recover it.
00:29:32.399 [2024-07-26 11:37:27.988793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.399 [2024-07-26 11:37:27.988956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.399 [2024-07-26 11:37:27.988990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.399 [2024-07-26 11:37:27.989008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.399 [2024-07-26 11:37:27.989025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.399 [2024-07-26 11:37:27.989063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.399 qpair failed and we were unable to recover it.
00:29:32.399 [2024-07-26 11:37:27.998794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.399 [2024-07-26 11:37:27.998956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.399 [2024-07-26 11:37:27.998990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.399 [2024-07-26 11:37:27.999008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.399 [2024-07-26 11:37:27.999025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.399 [2024-07-26 11:37:27.999066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.399 qpair failed and we were unable to recover it.
00:29:32.399 [2024-07-26 11:37:28.008831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.399 [2024-07-26 11:37:28.008986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.399 [2024-07-26 11:37:28.009020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.399 [2024-07-26 11:37:28.009039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.399 [2024-07-26 11:37:28.009056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.399 [2024-07-26 11:37:28.009095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.399 qpair failed and we were unable to recover it.
00:29:32.399 [2024-07-26 11:37:28.018848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.399 [2024-07-26 11:37:28.019008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.399 [2024-07-26 11:37:28.019041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.399 [2024-07-26 11:37:28.019059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.399 [2024-07-26 11:37:28.019076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.399 [2024-07-26 11:37:28.019114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.399 qpair failed and we were unable to recover it.
00:29:32.399 [2024-07-26 11:37:28.028983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.399 [2024-07-26 11:37:28.029140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.399 [2024-07-26 11:37:28.029175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.399 [2024-07-26 11:37:28.029193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.399 [2024-07-26 11:37:28.029210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.399 [2024-07-26 11:37:28.029247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.399 qpair failed and we were unable to recover it.
00:29:32.399 [2024-07-26 11:37:28.039004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.399 [2024-07-26 11:37:28.039157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.399 [2024-07-26 11:37:28.039191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.399 [2024-07-26 11:37:28.039209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.399 [2024-07-26 11:37:28.039227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.399 [2024-07-26 11:37:28.039266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.399 qpair failed and we were unable to recover it.
00:29:32.399 [2024-07-26 11:37:28.048987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.399 [2024-07-26 11:37:28.049137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.399 [2024-07-26 11:37:28.049171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.399 [2024-07-26 11:37:28.049189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.399 [2024-07-26 11:37:28.049206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.399 [2024-07-26 11:37:28.049245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.399 qpair failed and we were unable to recover it.
00:29:32.659 [2024-07-26 11:37:28.059075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.659 [2024-07-26 11:37:28.059240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.659 [2024-07-26 11:37:28.059273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.659 [2024-07-26 11:37:28.059292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.659 [2024-07-26 11:37:28.059308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.659 [2024-07-26 11:37:28.059349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.659 qpair failed and we were unable to recover it.
00:29:32.659 [2024-07-26 11:37:28.069008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.659 [2024-07-26 11:37:28.069177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.659 [2024-07-26 11:37:28.069210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.659 [2024-07-26 11:37:28.069237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.659 [2024-07-26 11:37:28.069255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.659 [2024-07-26 11:37:28.069293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.659 qpair failed and we were unable to recover it.
00:29:32.659 [2024-07-26 11:37:28.079063] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.659 [2024-07-26 11:37:28.079228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.659 [2024-07-26 11:37:28.079261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.659 [2024-07-26 11:37:28.079279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.659 [2024-07-26 11:37:28.079296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.659 [2024-07-26 11:37:28.079334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.659 qpair failed and we were unable to recover it.
00:29:32.659 [2024-07-26 11:37:28.089129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.659 [2024-07-26 11:37:28.089293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.659 [2024-07-26 11:37:28.089326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.659 [2024-07-26 11:37:28.089345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.659 [2024-07-26 11:37:28.089361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.659 [2024-07-26 11:37:28.089401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.659 qpair failed and we were unable to recover it.
00:29:32.659 [2024-07-26 11:37:28.099140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.659 [2024-07-26 11:37:28.099306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.659 [2024-07-26 11:37:28.099338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.659 [2024-07-26 11:37:28.099356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.659 [2024-07-26 11:37:28.099373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.659 [2024-07-26 11:37:28.099412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.659 qpair failed and we were unable to recover it.
00:29:32.659 [2024-07-26 11:37:28.109156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.659 [2024-07-26 11:37:28.109312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.659 [2024-07-26 11:37:28.109345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.659 [2024-07-26 11:37:28.109363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.659 [2024-07-26 11:37:28.109381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.659 [2024-07-26 11:37:28.109418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.659 qpair failed and we were unable to recover it.
00:29:32.659 [2024-07-26 11:37:28.119197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.659 [2024-07-26 11:37:28.119381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.659 [2024-07-26 11:37:28.119414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.659 [2024-07-26 11:37:28.119448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.659 [2024-07-26 11:37:28.119482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.659 [2024-07-26 11:37:28.119517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.659 qpair failed and we were unable to recover it.
00:29:32.659 [2024-07-26 11:37:28.129220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.659 [2024-07-26 11:37:28.129367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.659 [2024-07-26 11:37:28.129400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.659 [2024-07-26 11:37:28.129419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.659 [2024-07-26 11:37:28.129445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.659 [2024-07-26 11:37:28.129497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.659 qpair failed and we were unable to recover it.
00:29:32.659 [2024-07-26 11:37:28.139260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.659 [2024-07-26 11:37:28.139423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.659 [2024-07-26 11:37:28.139488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.659 [2024-07-26 11:37:28.139506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.659 [2024-07-26 11:37:28.139520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.659 [2024-07-26 11:37:28.139554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.659 qpair failed and we were unable to recover it.
00:29:32.659 [2024-07-26 11:37:28.149233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.659 [2024-07-26 11:37:28.149399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.659 [2024-07-26 11:37:28.149441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.659 [2024-07-26 11:37:28.149477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.659 [2024-07-26 11:37:28.149492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.659 [2024-07-26 11:37:28.149525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.659 qpair failed and we were unable to recover it.
00:29:32.659 [2024-07-26 11:37:28.159352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.659 [2024-07-26 11:37:28.159520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.659 [2024-07-26 11:37:28.159560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.659 [2024-07-26 11:37:28.159577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.659 [2024-07-26 11:37:28.159591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.659 [2024-07-26 11:37:28.159625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.659 qpair failed and we were unable to recover it.
00:29:32.659 [2024-07-26 11:37:28.169330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.659 [2024-07-26 11:37:28.169495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.659 [2024-07-26 11:37:28.169523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.659 [2024-07-26 11:37:28.169551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.659 [2024-07-26 11:37:28.169565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.659 [2024-07-26 11:37:28.169599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.659 qpair failed and we were unable to recover it.
00:29:32.659 [2024-07-26 11:37:28.179372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.659 [2024-07-26 11:37:28.179552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.660 [2024-07-26 11:37:28.179581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.660 [2024-07-26 11:37:28.179596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.660 [2024-07-26 11:37:28.179611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.660 [2024-07-26 11:37:28.179643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.660 qpair failed and we were unable to recover it.
00:29:32.660 [2024-07-26 11:37:28.189417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.660 [2024-07-26 11:37:28.189580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.660 [2024-07-26 11:37:28.189608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.660 [2024-07-26 11:37:28.189624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.660 [2024-07-26 11:37:28.189638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.660 [2024-07-26 11:37:28.189690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.660 qpair failed and we were unable to recover it.
00:29:32.660 [2024-07-26 11:37:28.199401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.660 [2024-07-26 11:37:28.199582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.660 [2024-07-26 11:37:28.199611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.660 [2024-07-26 11:37:28.199627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.660 [2024-07-26 11:37:28.199641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.660 [2024-07-26 11:37:28.199681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.660 qpair failed and we were unable to recover it.
00:29:32.660 [2024-07-26 11:37:28.209484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.660 [2024-07-26 11:37:28.209614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.660 [2024-07-26 11:37:28.209643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.660 [2024-07-26 11:37:28.209659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.660 [2024-07-26 11:37:28.209674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.660 [2024-07-26 11:37:28.209724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.660 qpair failed and we were unable to recover it.
00:29:32.660 [2024-07-26 11:37:28.219498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.660 [2024-07-26 11:37:28.219632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.660 [2024-07-26 11:37:28.219659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.660 [2024-07-26 11:37:28.219694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.660 [2024-07-26 11:37:28.219711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:32.660 [2024-07-26 11:37:28.219750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:32.660 qpair failed and we were unable to recover it.
00:29:32.660 [2024-07-26 11:37:28.229511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.660 [2024-07-26 11:37:28.229656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.660 [2024-07-26 11:37:28.229704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.660 [2024-07-26 11:37:28.229723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.660 [2024-07-26 11:37:28.229739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.660 [2024-07-26 11:37:28.229780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.660 qpair failed and we were unable to recover it. 00:29:32.660 [2024-07-26 11:37:28.239636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.660 [2024-07-26 11:37:28.239786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.660 [2024-07-26 11:37:28.239819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.660 [2024-07-26 11:37:28.239837] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.660 [2024-07-26 11:37:28.239855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.660 [2024-07-26 11:37:28.239895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.660 qpair failed and we were unable to recover it. 00:29:32.660 [2024-07-26 11:37:28.249574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.660 [2024-07-26 11:37:28.249711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.660 [2024-07-26 11:37:28.249761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.660 [2024-07-26 11:37:28.249782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.660 [2024-07-26 11:37:28.249799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.660 [2024-07-26 11:37:28.249841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.660 qpair failed and we were unable to recover it. 
00:29:32.660 [2024-07-26 11:37:28.259588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.660 [2024-07-26 11:37:28.259751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.660 [2024-07-26 11:37:28.259785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.660 [2024-07-26 11:37:28.259804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.660 [2024-07-26 11:37:28.259821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.660 [2024-07-26 11:37:28.259859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.660 qpair failed and we were unable to recover it. 00:29:32.660 [2024-07-26 11:37:28.269615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.660 [2024-07-26 11:37:28.269786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.660 [2024-07-26 11:37:28.269819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.660 [2024-07-26 11:37:28.269838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.660 [2024-07-26 11:37:28.269855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.660 [2024-07-26 11:37:28.269895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.660 qpair failed and we were unable to recover it. 00:29:32.660 [2024-07-26 11:37:28.279603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.660 [2024-07-26 11:37:28.279790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.660 [2024-07-26 11:37:28.279824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.660 [2024-07-26 11:37:28.279843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.660 [2024-07-26 11:37:28.279860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.660 [2024-07-26 11:37:28.279900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.660 qpair failed and we were unable to recover it. 
00:29:32.660 [2024-07-26 11:37:28.289636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.660 [2024-07-26 11:37:28.289817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.660 [2024-07-26 11:37:28.289850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.660 [2024-07-26 11:37:28.289869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.660 [2024-07-26 11:37:28.289885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.660 [2024-07-26 11:37:28.289932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.660 qpair failed and we were unable to recover it. 00:29:32.660 [2024-07-26 11:37:28.299727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.660 [2024-07-26 11:37:28.299875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.660 [2024-07-26 11:37:28.299908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.660 [2024-07-26 11:37:28.299927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.660 [2024-07-26 11:37:28.299944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.660 [2024-07-26 11:37:28.299982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.660 qpair failed and we were unable to recover it. 00:29:32.660 [2024-07-26 11:37:28.309757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.660 [2024-07-26 11:37:28.309923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.660 [2024-07-26 11:37:28.309955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.661 [2024-07-26 11:37:28.309973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.661 [2024-07-26 11:37:28.309990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.661 [2024-07-26 11:37:28.310029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.661 qpair failed and we were unable to recover it. 
00:29:32.920 [2024-07-26 11:37:28.319844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.920 [2024-07-26 11:37:28.320013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.920 [2024-07-26 11:37:28.320047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.920 [2024-07-26 11:37:28.320065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.920 [2024-07-26 11:37:28.320082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.920 [2024-07-26 11:37:28.320122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.920 qpair failed and we were unable to recover it. 00:29:32.920 [2024-07-26 11:37:28.329755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.920 [2024-07-26 11:37:28.329930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.920 [2024-07-26 11:37:28.329963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.920 [2024-07-26 11:37:28.329982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.920 [2024-07-26 11:37:28.329998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.920 [2024-07-26 11:37:28.330039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.920 qpair failed and we were unable to recover it. 00:29:32.920 [2024-07-26 11:37:28.339880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.920 [2024-07-26 11:37:28.340050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.920 [2024-07-26 11:37:28.340084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.920 [2024-07-26 11:37:28.340102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.920 [2024-07-26 11:37:28.340119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.920 [2024-07-26 11:37:28.340157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.920 qpair failed and we were unable to recover it. 
00:29:32.920 [2024-07-26 11:37:28.349858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.920 [2024-07-26 11:37:28.350011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.920 [2024-07-26 11:37:28.350044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.920 [2024-07-26 11:37:28.350062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.920 [2024-07-26 11:37:28.350078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.920 [2024-07-26 11:37:28.350118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.920 qpair failed and we were unable to recover it. 00:29:32.920 [2024-07-26 11:37:28.359904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.920 [2024-07-26 11:37:28.360067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.920 [2024-07-26 11:37:28.360100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.920 [2024-07-26 11:37:28.360119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.920 [2024-07-26 11:37:28.360136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.920 [2024-07-26 11:37:28.360175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.920 qpair failed and we were unable to recover it. 00:29:32.920 [2024-07-26 11:37:28.369897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.920 [2024-07-26 11:37:28.370049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.920 [2024-07-26 11:37:28.370082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.920 [2024-07-26 11:37:28.370101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.920 [2024-07-26 11:37:28.370118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.920 [2024-07-26 11:37:28.370156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.920 qpair failed and we were unable to recover it. 
00:29:32.920 [2024-07-26 11:37:28.379922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.920 [2024-07-26 11:37:28.380079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.921 [2024-07-26 11:37:28.380113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.921 [2024-07-26 11:37:28.380131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.921 [2024-07-26 11:37:28.380161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.921 [2024-07-26 11:37:28.380202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.921 qpair failed and we were unable to recover it. 00:29:32.921 [2024-07-26 11:37:28.389961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.921 [2024-07-26 11:37:28.390116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.921 [2024-07-26 11:37:28.390155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.921 [2024-07-26 11:37:28.390173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.921 [2024-07-26 11:37:28.390190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.921 [2024-07-26 11:37:28.390228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.921 qpair failed and we were unable to recover it. 00:29:32.921 [2024-07-26 11:37:28.399978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.921 [2024-07-26 11:37:28.400146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.921 [2024-07-26 11:37:28.400180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.921 [2024-07-26 11:37:28.400199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.921 [2024-07-26 11:37:28.400215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.921 [2024-07-26 11:37:28.400253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.921 qpair failed and we were unable to recover it. 
00:29:32.921 [2024-07-26 11:37:28.410126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.921 [2024-07-26 11:37:28.410304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.921 [2024-07-26 11:37:28.410337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.921 [2024-07-26 11:37:28.410355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.921 [2024-07-26 11:37:28.410371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.921 [2024-07-26 11:37:28.410409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.921 qpair failed and we were unable to recover it. 00:29:32.921 [2024-07-26 11:37:28.420079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.921 [2024-07-26 11:37:28.420235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.921 [2024-07-26 11:37:28.420269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.921 [2024-07-26 11:37:28.420287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.921 [2024-07-26 11:37:28.420303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.921 [2024-07-26 11:37:28.420340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.921 qpair failed and we were unable to recover it. 00:29:32.921 [2024-07-26 11:37:28.430105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.921 [2024-07-26 11:37:28.430266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.921 [2024-07-26 11:37:28.430299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.921 [2024-07-26 11:37:28.430318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.921 [2024-07-26 11:37:28.430335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.921 [2024-07-26 11:37:28.430373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.921 qpair failed and we were unable to recover it. 
00:29:32.921 [2024-07-26 11:37:28.440114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.921 [2024-07-26 11:37:28.440265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.921 [2024-07-26 11:37:28.440298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.921 [2024-07-26 11:37:28.440316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.921 [2024-07-26 11:37:28.440333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.921 [2024-07-26 11:37:28.440374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.921 qpair failed and we were unable to recover it. 00:29:32.921 [2024-07-26 11:37:28.450127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.921 [2024-07-26 11:37:28.450279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.921 [2024-07-26 11:37:28.450312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.921 [2024-07-26 11:37:28.450331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.921 [2024-07-26 11:37:28.450348] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.921 [2024-07-26 11:37:28.450388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.921 qpair failed and we were unable to recover it. 00:29:32.921 [2024-07-26 11:37:28.460174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.921 [2024-07-26 11:37:28.460348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.921 [2024-07-26 11:37:28.460380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.921 [2024-07-26 11:37:28.460399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.921 [2024-07-26 11:37:28.460414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.921 [2024-07-26 11:37:28.460485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.921 qpair failed and we were unable to recover it. 
00:29:32.921 [2024-07-26 11:37:28.470253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.921 [2024-07-26 11:37:28.470410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.921 [2024-07-26 11:37:28.470471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.921 [2024-07-26 11:37:28.470499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.921 [2024-07-26 11:37:28.470515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.921 [2024-07-26 11:37:28.470551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.921 qpair failed and we were unable to recover it. 00:29:32.921 [2024-07-26 11:37:28.480219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.921 [2024-07-26 11:37:28.480374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.921 [2024-07-26 11:37:28.480407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.921 [2024-07-26 11:37:28.480426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.921 [2024-07-26 11:37:28.480472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.921 [2024-07-26 11:37:28.480507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.921 qpair failed and we were unable to recover it. 00:29:32.921 [2024-07-26 11:37:28.490230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.921 [2024-07-26 11:37:28.490389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.921 [2024-07-26 11:37:28.490423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.921 [2024-07-26 11:37:28.490467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.921 [2024-07-26 11:37:28.490483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.921 [2024-07-26 11:37:28.490517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.921 qpair failed and we were unable to recover it. 
00:29:32.921 [2024-07-26 11:37:28.500276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.921 [2024-07-26 11:37:28.500460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.921 [2024-07-26 11:37:28.500490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.921 [2024-07-26 11:37:28.500506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.921 [2024-07-26 11:37:28.500520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.921 [2024-07-26 11:37:28.500554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.921 qpair failed and we were unable to recover it. 00:29:32.921 [2024-07-26 11:37:28.510340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.921 [2024-07-26 11:37:28.510514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.921 [2024-07-26 11:37:28.510543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.922 [2024-07-26 11:37:28.510559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.922 [2024-07-26 11:37:28.510574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.922 [2024-07-26 11:37:28.510607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.922 qpair failed and we were unable to recover it. 00:29:32.922 [2024-07-26 11:37:28.520316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.922 [2024-07-26 11:37:28.520516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.922 [2024-07-26 11:37:28.520545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.922 [2024-07-26 11:37:28.520561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.922 [2024-07-26 11:37:28.520575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.922 [2024-07-26 11:37:28.520608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.922 qpair failed and we were unable to recover it. 
00:29:32.922 [2024-07-26 11:37:28.530345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.922 [2024-07-26 11:37:28.530521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.922 [2024-07-26 11:37:28.530550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.922 [2024-07-26 11:37:28.530566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.922 [2024-07-26 11:37:28.530581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.922 [2024-07-26 11:37:28.530615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.922 qpair failed and we were unable to recover it. 00:29:32.922 [2024-07-26 11:37:28.540392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.922 [2024-07-26 11:37:28.540560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.922 [2024-07-26 11:37:28.540589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.922 [2024-07-26 11:37:28.540606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.922 [2024-07-26 11:37:28.540620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.922 [2024-07-26 11:37:28.540655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.922 qpair failed and we were unable to recover it. 00:29:32.922 [2024-07-26 11:37:28.550448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.922 [2024-07-26 11:37:28.550624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.922 [2024-07-26 11:37:28.550652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.922 [2024-07-26 11:37:28.550668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.922 [2024-07-26 11:37:28.550683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.922 [2024-07-26 11:37:28.550716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.922 qpair failed and we were unable to recover it. 
00:29:32.922 [2024-07-26 11:37:28.560446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.922 [2024-07-26 11:37:28.560601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.922 [2024-07-26 11:37:28.560629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.922 [2024-07-26 11:37:28.560652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.922 [2024-07-26 11:37:28.560686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.922 [2024-07-26 11:37:28.560725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.922 qpair failed and we were unable to recover it. 00:29:32.922 [2024-07-26 11:37:28.570480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.922 [2024-07-26 11:37:28.570632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.922 [2024-07-26 11:37:28.570660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.922 [2024-07-26 11:37:28.570676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.922 [2024-07-26 11:37:28.570706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:32.922 [2024-07-26 11:37:28.570745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.922 qpair failed and we were unable to recover it. 00:29:33.181 [2024-07-26 11:37:28.580587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.181 [2024-07-26 11:37:28.580729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.181 [2024-07-26 11:37:28.580778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.181 [2024-07-26 11:37:28.580796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.181 [2024-07-26 11:37:28.580812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:33.181 [2024-07-26 11:37:28.580851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.181 qpair failed and we were unable to recover it. 
00:29:33.181 [2024-07-26 11:37:28.590601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.181 [2024-07-26 11:37:28.590734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.181 [2024-07-26 11:37:28.590780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.181 [2024-07-26 11:37:28.590799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.181 [2024-07-26 11:37:28.590816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:33.181 [2024-07-26 11:37:28.590854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.181 qpair failed and we were unable to recover it. 00:29:33.181 [2024-07-26 11:37:28.600559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.181 [2024-07-26 11:37:28.600711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.181 [2024-07-26 11:37:28.600744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.181 [2024-07-26 11:37:28.600762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.181 [2024-07-26 11:37:28.600779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:33.181 [2024-07-26 11:37:28.600818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.181 qpair failed and we were unable to recover it. 00:29:33.181 [2024-07-26 11:37:28.610615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.181 [2024-07-26 11:37:28.610757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.181 [2024-07-26 11:37:28.610805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.181 [2024-07-26 11:37:28.610824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.181 [2024-07-26 11:37:28.610841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:33.181 [2024-07-26 11:37:28.610880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.181 qpair failed and we were unable to recover it. 
00:29:33.181 [2024-07-26 11:37:28.620637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.181 [2024-07-26 11:37:28.620831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.181 [2024-07-26 11:37:28.620864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.182 [2024-07-26 11:37:28.620883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.182 [2024-07-26 11:37:28.620899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:33.182 [2024-07-26 11:37:28.620938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.182 qpair failed and we were unable to recover it. 00:29:33.182 [2024-07-26 11:37:28.630814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.182 [2024-07-26 11:37:28.631006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.182 [2024-07-26 11:37:28.631038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.182 [2024-07-26 11:37:28.631056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.182 [2024-07-26 11:37:28.631073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:33.182 [2024-07-26 11:37:28.631113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.182 qpair failed and we were unable to recover it. 00:29:33.182 [2024-07-26 11:37:28.640685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.182 [2024-07-26 11:37:28.640841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.182 [2024-07-26 11:37:28.640874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.182 [2024-07-26 11:37:28.640892] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.182 [2024-07-26 11:37:28.640909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:33.182 [2024-07-26 11:37:28.640947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.182 qpair failed and we were unable to recover it. 
00:29:33.182 [2024-07-26 11:37:28.650756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.182 [2024-07-26 11:37:28.650912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.182 [2024-07-26 11:37:28.650952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.182 [2024-07-26 11:37:28.650971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.182 [2024-07-26 11:37:28.650988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:33.182 [2024-07-26 11:37:28.651027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.182 qpair failed and we were unable to recover it. 00:29:33.182 [2024-07-26 11:37:28.660766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.182 [2024-07-26 11:37:28.660930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.182 [2024-07-26 11:37:28.660963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.182 [2024-07-26 11:37:28.660982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.182 [2024-07-26 11:37:28.660998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:33.182 [2024-07-26 11:37:28.661035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.182 qpair failed and we were unable to recover it. 00:29:33.182 [2024-07-26 11:37:28.670829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.182 [2024-07-26 11:37:28.671028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.182 [2024-07-26 11:37:28.671060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.182 [2024-07-26 11:37:28.671078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.182 [2024-07-26 11:37:28.671095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:33.182 [2024-07-26 11:37:28.671133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.182 qpair failed and we were unable to recover it. 
00:29:33.182 [2024-07-26 11:37:28.680794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.182 [2024-07-26 11:37:28.680946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.182 [2024-07-26 11:37:28.680979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.182 [2024-07-26 11:37:28.680997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.182 [2024-07-26 11:37:28.681014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:33.182 [2024-07-26 11:37:28.681053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.182 qpair failed and we were unable to recover it. 00:29:33.182 [2024-07-26 11:37:28.690806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.182 [2024-07-26 11:37:28.690957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.182 [2024-07-26 11:37:28.690990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.182 [2024-07-26 11:37:28.691008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.182 [2024-07-26 11:37:28.691024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:33.182 [2024-07-26 11:37:28.691070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.182 qpair failed and we were unable to recover it. 00:29:33.182 [2024-07-26 11:37:28.700818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.182 [2024-07-26 11:37:28.700980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.182 [2024-07-26 11:37:28.701013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.182 [2024-07-26 11:37:28.701032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.182 [2024-07-26 11:37:28.701048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:33.182 [2024-07-26 11:37:28.701085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.182 qpair failed and we were unable to recover it. 
00:29:33.182 [2024-07-26 11:37:28.710916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.182 [2024-07-26 11:37:28.711076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.182 [2024-07-26 11:37:28.711109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.182 [2024-07-26 11:37:28.711127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.182 [2024-07-26 11:37:28.711144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:33.182 [2024-07-26 11:37:28.711183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.182 qpair failed and we were unable to recover it. 00:29:33.182 [2024-07-26 11:37:28.720889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.182 [2024-07-26 11:37:28.721051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.182 [2024-07-26 11:37:28.721085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.182 [2024-07-26 11:37:28.721103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.182 [2024-07-26 11:37:28.721120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:33.182 [2024-07-26 11:37:28.721158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.182 qpair failed and we were unable to recover it. 00:29:33.182 [2024-07-26 11:37:28.730921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.182 [2024-07-26 11:37:28.731076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.182 [2024-07-26 11:37:28.731109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.182 [2024-07-26 11:37:28.731127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.182 [2024-07-26 11:37:28.731144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:33.182 [2024-07-26 11:37:28.731182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.182 qpair failed and we were unable to recover it. 
00:29:33.182 [2024-07-26 11:37:28.741045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.182 [2024-07-26 11:37:28.741208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.182 [2024-07-26 11:37:28.741248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.182 [2024-07-26 11:37:28.741267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.182 [2024-07-26 11:37:28.741284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:33.182 [2024-07-26 11:37:28.741322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:33.182 qpair failed and we were unable to recover it.
[The same seven-record CONNECT failure sequence repeats, differing only in timestamps, roughly every 10 ms (68 further attempts, 2024-07-26 11:37:28.750996 through 11:37:29.423); every attempt ends with "qpair failed and we were unable to recover it."]
00:29:33.966 [2024-07-26 11:37:29.433067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.966 [2024-07-26 11:37:29.433238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.966 [2024-07-26 11:37:29.433272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.966 [2024-07-26 11:37:29.433290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.966 [2024-07-26 11:37:29.433307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:33.966 [2024-07-26 11:37:29.433346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.966 qpair failed and we were unable to recover it. 00:29:33.966 [2024-07-26 11:37:29.443082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.966 [2024-07-26 11:37:29.443235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.966 [2024-07-26 11:37:29.443269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.966 [2024-07-26 11:37:29.443287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.966 [2024-07-26 11:37:29.443304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:33.966 [2024-07-26 11:37:29.443342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.966 qpair failed and we were unable to recover it. 00:29:33.966 [2024-07-26 11:37:29.453109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.966 [2024-07-26 11:37:29.453262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.966 [2024-07-26 11:37:29.453302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.966 [2024-07-26 11:37:29.453322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.966 [2024-07-26 11:37:29.453339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:33.966 [2024-07-26 11:37:29.453376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.966 qpair failed and we were unable to recover it. 
00:29:33.966 [2024-07-26 11:37:29.463176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.966 [2024-07-26 11:37:29.463336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.966 [2024-07-26 11:37:29.463368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.966 [2024-07-26 11:37:29.463386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.967 [2024-07-26 11:37:29.463401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:33.967 [2024-07-26 11:37:29.463449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.967 qpair failed and we were unable to recover it. 00:29:33.967 [2024-07-26 11:37:29.473183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.967 [2024-07-26 11:37:29.473349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.967 [2024-07-26 11:37:29.473383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.967 [2024-07-26 11:37:29.473401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.967 [2024-07-26 11:37:29.473418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:33.967 [2024-07-26 11:37:29.473482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.967 qpair failed and we were unable to recover it. 00:29:33.967 [2024-07-26 11:37:29.483226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.967 [2024-07-26 11:37:29.483374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.967 [2024-07-26 11:37:29.483408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.967 [2024-07-26 11:37:29.483426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.967 [2024-07-26 11:37:29.483470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:33.967 [2024-07-26 11:37:29.483505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.967 qpair failed and we were unable to recover it. 
00:29:33.967 [2024-07-26 11:37:29.493300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.967 [2024-07-26 11:37:29.493510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.967 [2024-07-26 11:37:29.493540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.967 [2024-07-26 11:37:29.493556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.967 [2024-07-26 11:37:29.493570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:33.967 [2024-07-26 11:37:29.493610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.967 qpair failed and we were unable to recover it. 00:29:33.967 [2024-07-26 11:37:29.503322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.967 [2024-07-26 11:37:29.503532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.967 [2024-07-26 11:37:29.503560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.967 [2024-07-26 11:37:29.503576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.967 [2024-07-26 11:37:29.503590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:33.967 [2024-07-26 11:37:29.503623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.967 qpair failed and we were unable to recover it. 00:29:33.967 [2024-07-26 11:37:29.513435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.967 [2024-07-26 11:37:29.513596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.967 [2024-07-26 11:37:29.513624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.967 [2024-07-26 11:37:29.513640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.967 [2024-07-26 11:37:29.513654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:33.967 [2024-07-26 11:37:29.513699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.967 qpair failed and we were unable to recover it. 
00:29:33.967 [2024-07-26 11:37:29.523423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.967 [2024-07-26 11:37:29.523585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.967 [2024-07-26 11:37:29.523614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.967 [2024-07-26 11:37:29.523630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.967 [2024-07-26 11:37:29.523644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:33.967 [2024-07-26 11:37:29.523677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.967 qpair failed and we were unable to recover it. 00:29:33.967 [2024-07-26 11:37:29.533341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.967 [2024-07-26 11:37:29.533511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.967 [2024-07-26 11:37:29.533541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.967 [2024-07-26 11:37:29.533557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.967 [2024-07-26 11:37:29.533571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:33.967 [2024-07-26 11:37:29.533605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.967 qpair failed and we were unable to recover it. 00:29:33.967 [2024-07-26 11:37:29.543442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.967 [2024-07-26 11:37:29.543598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.967 [2024-07-26 11:37:29.543632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.967 [2024-07-26 11:37:29.543649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.967 [2024-07-26 11:37:29.543663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:33.967 [2024-07-26 11:37:29.543719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.967 qpair failed and we were unable to recover it. 
00:29:33.967 [2024-07-26 11:37:29.553511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.967 [2024-07-26 11:37:29.553662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.967 [2024-07-26 11:37:29.553690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.967 [2024-07-26 11:37:29.553706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.967 [2024-07-26 11:37:29.553720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:33.967 [2024-07-26 11:37:29.553775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.967 qpair failed and we were unable to recover it. 00:29:33.967 [2024-07-26 11:37:29.563494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.967 [2024-07-26 11:37:29.563642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.967 [2024-07-26 11:37:29.563687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.967 [2024-07-26 11:37:29.563706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.967 [2024-07-26 11:37:29.563723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:33.967 [2024-07-26 11:37:29.563762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.967 qpair failed and we were unable to recover it. 00:29:33.967 [2024-07-26 11:37:29.573508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.967 [2024-07-26 11:37:29.573645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.967 [2024-07-26 11:37:29.573674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.967 [2024-07-26 11:37:29.573697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.967 [2024-07-26 11:37:29.573728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:33.967 [2024-07-26 11:37:29.573769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.967 qpair failed and we were unable to recover it. 
00:29:33.967 [2024-07-26 11:37:29.583571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.967 [2024-07-26 11:37:29.583739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.967 [2024-07-26 11:37:29.583772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.967 [2024-07-26 11:37:29.583791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.967 [2024-07-26 11:37:29.583814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:33.967 [2024-07-26 11:37:29.583858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.967 qpair failed and we were unable to recover it. 00:29:33.967 [2024-07-26 11:37:29.593543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.967 [2024-07-26 11:37:29.593679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.967 [2024-07-26 11:37:29.593707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.967 [2024-07-26 11:37:29.593723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.967 [2024-07-26 11:37:29.593737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:33.967 [2024-07-26 11:37:29.593790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.968 qpair failed and we were unable to recover it. 00:29:33.968 [2024-07-26 11:37:29.603568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.968 [2024-07-26 11:37:29.603725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.968 [2024-07-26 11:37:29.603759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.968 [2024-07-26 11:37:29.603777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.968 [2024-07-26 11:37:29.603795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:33.968 [2024-07-26 11:37:29.603833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.968 qpair failed and we were unable to recover it. 
00:29:33.968 [2024-07-26 11:37:29.613589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.968 [2024-07-26 11:37:29.613750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.968 [2024-07-26 11:37:29.613797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.968 [2024-07-26 11:37:29.613816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.968 [2024-07-26 11:37:29.613832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:33.968 [2024-07-26 11:37:29.613871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.968 qpair failed and we were unable to recover it. 00:29:33.968 [2024-07-26 11:37:29.623770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.968 [2024-07-26 11:37:29.623992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.968 [2024-07-26 11:37:29.624026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.968 [2024-07-26 11:37:29.624045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.968 [2024-07-26 11:37:29.624061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:33.968 [2024-07-26 11:37:29.624100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.968 qpair failed and we were unable to recover it. 00:29:34.228 [2024-07-26 11:37:29.633673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.228 [2024-07-26 11:37:29.633832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.228 [2024-07-26 11:37:29.633865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.228 [2024-07-26 11:37:29.633885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.228 [2024-07-26 11:37:29.633902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:34.228 [2024-07-26 11:37:29.633941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.228 qpair failed and we were unable to recover it. 
00:29:34.228 [2024-07-26 11:37:29.643738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.228 [2024-07-26 11:37:29.644009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.228 [2024-07-26 11:37:29.644042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.228 [2024-07-26 11:37:29.644061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.228 [2024-07-26 11:37:29.644078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:34.228 [2024-07-26 11:37:29.644118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.228 qpair failed and we were unable to recover it. 00:29:34.228 [2024-07-26 11:37:29.653790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.228 [2024-07-26 11:37:29.654008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.228 [2024-07-26 11:37:29.654041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.228 [2024-07-26 11:37:29.654060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.228 [2024-07-26 11:37:29.654077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:34.228 [2024-07-26 11:37:29.654116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.228 qpair failed and we were unable to recover it. 00:29:34.228 [2024-07-26 11:37:29.663870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.228 [2024-07-26 11:37:29.664032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.228 [2024-07-26 11:37:29.664064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.228 [2024-07-26 11:37:29.664084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.228 [2024-07-26 11:37:29.664101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:34.228 [2024-07-26 11:37:29.664139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.228 qpair failed and we were unable to recover it. 
00:29:34.228 [2024-07-26 11:37:29.673780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.228 [2024-07-26 11:37:29.673937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.228 [2024-07-26 11:37:29.673970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.228 [2024-07-26 11:37:29.673988] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.228 [2024-07-26 11:37:29.674013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:34.228 [2024-07-26 11:37:29.674052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.228 qpair failed and we were unable to recover it. 00:29:34.228 [2024-07-26 11:37:29.683861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.228 [2024-07-26 11:37:29.684042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.228 [2024-07-26 11:37:29.684076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.228 [2024-07-26 11:37:29.684094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.228 [2024-07-26 11:37:29.684110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:34.228 [2024-07-26 11:37:29.684147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.228 qpair failed and we were unable to recover it. 00:29:34.228 [2024-07-26 11:37:29.693850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.228 [2024-07-26 11:37:29.693998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.228 [2024-07-26 11:37:29.694031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.228 [2024-07-26 11:37:29.694050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.228 [2024-07-26 11:37:29.694066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:34.228 [2024-07-26 11:37:29.694107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.228 qpair failed and we were unable to recover it. 
00:29:34.228 [2024-07-26 11:37:29.703863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.228 [2024-07-26 11:37:29.704062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.228 [2024-07-26 11:37:29.704096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.228 [2024-07-26 11:37:29.704119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.228 [2024-07-26 11:37:29.704138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:34.228 [2024-07-26 11:37:29.704177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.228 qpair failed and we were unable to recover it. 00:29:34.228 [2024-07-26 11:37:29.713888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.228 [2024-07-26 11:37:29.714041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.228 [2024-07-26 11:37:29.714074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.228 [2024-07-26 11:37:29.714092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.228 [2024-07-26 11:37:29.714109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:34.228 [2024-07-26 11:37:29.714148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.228 qpair failed and we were unable to recover it. 00:29:34.228 [2024-07-26 11:37:29.723913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.228 [2024-07-26 11:37:29.724070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.228 [2024-07-26 11:37:29.724104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.228 [2024-07-26 11:37:29.724123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.228 [2024-07-26 11:37:29.724140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:34.228 [2024-07-26 11:37:29.724177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.228 qpair failed and we were unable to recover it. 
00:29:34.228 [2024-07-26 11:37:29.733933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.228 [2024-07-26 11:37:29.734074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.228 [2024-07-26 11:37:29.734107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.228 [2024-07-26 11:37:29.734125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.228 [2024-07-26 11:37:29.734143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:34.228 [2024-07-26 11:37:29.734182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.228 qpair failed and we were unable to recover it. 00:29:34.228 [2024-07-26 11:37:29.744004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.228 [2024-07-26 11:37:29.744164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.229 [2024-07-26 11:37:29.744197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.229 [2024-07-26 11:37:29.744215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.229 [2024-07-26 11:37:29.744231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:34.229 [2024-07-26 11:37:29.744271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.229 qpair failed and we were unable to recover it. 00:29:34.229 [2024-07-26 11:37:29.754028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.229 [2024-07-26 11:37:29.754190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.229 [2024-07-26 11:37:29.754224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.229 [2024-07-26 11:37:29.754242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.229 [2024-07-26 11:37:29.754259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:34.229 [2024-07-26 11:37:29.754298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.229 qpair failed and we were unable to recover it. 
00:29:34.229 [2024-07-26 11:37:29.764124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.229 [2024-07-26 11:37:29.764280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.229 [2024-07-26 11:37:29.764314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.229 [2024-07-26 11:37:29.764350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.229 [2024-07-26 11:37:29.764368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:34.229 [2024-07-26 11:37:29.764408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.229 qpair failed and we were unable to recover it. 00:29:34.229 [2024-07-26 11:37:29.774184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.229 [2024-07-26 11:37:29.774339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.229 [2024-07-26 11:37:29.774372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.229 [2024-07-26 11:37:29.774390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.229 [2024-07-26 11:37:29.774407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:34.229 [2024-07-26 11:37:29.774455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.229 qpair failed and we were unable to recover it. 00:29:34.229 [2024-07-26 11:37:29.784134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.229 [2024-07-26 11:37:29.784296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.229 [2024-07-26 11:37:29.784329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.229 [2024-07-26 11:37:29.784347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.229 [2024-07-26 11:37:29.784364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:34.229 [2024-07-26 11:37:29.784401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.229 qpair failed and we were unable to recover it. 
00:29:34.229 [2024-07-26 11:37:29.794180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.229 [2024-07-26 11:37:29.794348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.229 [2024-07-26 11:37:29.794382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.229 [2024-07-26 11:37:29.794401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.229 [2024-07-26 11:37:29.794418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:34.229 [2024-07-26 11:37:29.794491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.229 qpair failed and we were unable to recover it. 00:29:34.229 [2024-07-26 11:37:29.804203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.229 [2024-07-26 11:37:29.804365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.229 [2024-07-26 11:37:29.804398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.229 [2024-07-26 11:37:29.804416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.229 [2024-07-26 11:37:29.804442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:34.229 [2024-07-26 11:37:29.804495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.229 qpair failed and we were unable to recover it. 00:29:34.229 [2024-07-26 11:37:29.814279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.229 [2024-07-26 11:37:29.814505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.229 [2024-07-26 11:37:29.814534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.229 [2024-07-26 11:37:29.814550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.229 [2024-07-26 11:37:29.814564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:34.229 [2024-07-26 11:37:29.814599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.229 qpair failed and we were unable to recover it. 
00:29:34.229 [2024-07-26 11:37:29.824254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.229 [2024-07-26 11:37:29.824426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.229 [2024-07-26 11:37:29.824481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.229 [2024-07-26 11:37:29.824497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.229 [2024-07-26 11:37:29.824512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:34.229 [2024-07-26 11:37:29.824545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.229 qpair failed and we were unable to recover it. 00:29:34.229 [2024-07-26 11:37:29.834245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.229 [2024-07-26 11:37:29.834402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.229 [2024-07-26 11:37:29.834446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.229 [2024-07-26 11:37:29.834481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.229 [2024-07-26 11:37:29.834496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:34.229 [2024-07-26 11:37:29.834530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.229 qpair failed and we were unable to recover it. 00:29:34.229 [2024-07-26 11:37:29.844308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.229 [2024-07-26 11:37:29.844481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.229 [2024-07-26 11:37:29.844515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.229 [2024-07-26 11:37:29.844531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.229 [2024-07-26 11:37:29.844545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:34.229 [2024-07-26 11:37:29.844579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.229 qpair failed and we were unable to recover it. 
00:29:34.229 [2024-07-26 11:37:29.854307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.229 [2024-07-26 11:37:29.854478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.229 [2024-07-26 11:37:29.854512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.229 [2024-07-26 11:37:29.854528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.229 [2024-07-26 11:37:29.854543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:34.229 [2024-07-26 11:37:29.854576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.229 qpair failed and we were unable to recover it. 00:29:34.229 [2024-07-26 11:37:29.864389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.229 [2024-07-26 11:37:29.864564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.229 [2024-07-26 11:37:29.864593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.229 [2024-07-26 11:37:29.864609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.229 [2024-07-26 11:37:29.864623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:34.229 [2024-07-26 11:37:29.864657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.229 qpair failed and we were unable to recover it. 00:29:34.229 [2024-07-26 11:37:29.874407] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.229 [2024-07-26 11:37:29.874584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.229 [2024-07-26 11:37:29.874613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.229 [2024-07-26 11:37:29.874629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.230 [2024-07-26 11:37:29.874644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:34.230 [2024-07-26 11:37:29.874697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.230 qpair failed and we were unable to recover it. 
00:29:34.230 [2024-07-26 11:37:29.884424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.230 [2024-07-26 11:37:29.884581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.230 [2024-07-26 11:37:29.884610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.230 [2024-07-26 11:37:29.884626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.230 [2024-07-26 11:37:29.884640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:34.230 [2024-07-26 11:37:29.884690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.230 qpair failed and we were unable to recover it. 00:29:34.489 [2024-07-26 11:37:29.894503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.489 [2024-07-26 11:37:29.894640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.489 [2024-07-26 11:37:29.894669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.489 [2024-07-26 11:37:29.894685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.489 [2024-07-26 11:37:29.894699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:34.489 [2024-07-26 11:37:29.894762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.489 qpair failed and we were unable to recover it. 00:29:34.489 [2024-07-26 11:37:29.904515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.489 [2024-07-26 11:37:29.904693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.489 [2024-07-26 11:37:29.904740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.489 [2024-07-26 11:37:29.904759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.489 [2024-07-26 11:37:29.904777] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:34.489 [2024-07-26 11:37:29.904816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.489 qpair failed and we were unable to recover it. 
00:29:34.489 [2024-07-26 11:37:29.914511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.489 [2024-07-26 11:37:29.914646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.489 [2024-07-26 11:37:29.914675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.489 [2024-07-26 11:37:29.914690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.489 [2024-07-26 11:37:29.914705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:34.490 [2024-07-26 11:37:29.914754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.490 qpair failed and we were unable to recover it. 00:29:34.490 [2024-07-26 11:37:29.924580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.490 [2024-07-26 11:37:29.924777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.490 [2024-07-26 11:37:29.924810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.490 [2024-07-26 11:37:29.924829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.490 [2024-07-26 11:37:29.924845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:34.490 [2024-07-26 11:37:29.924883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.490 qpair failed and we were unable to recover it. 00:29:34.490 [2024-07-26 11:37:29.934682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.490 [2024-07-26 11:37:29.934858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.490 [2024-07-26 11:37:29.934891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.490 [2024-07-26 11:37:29.934910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.490 [2024-07-26 11:37:29.934926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:34.490 [2024-07-26 11:37:29.934966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:34.490 qpair failed and we were unable to recover it. 
00:29:34.490 [2024-07-26 11:37:29.944759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.490 [2024-07-26 11:37:29.944944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.490 [2024-07-26 11:37:29.944986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.490 [2024-07-26 11:37:29.945006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.490 [2024-07-26 11:37:29.945022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.490 [2024-07-26 11:37:29.945062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.490 qpair failed and we were unable to recover it.
00:29:34.490 [2024-07-26 11:37:29.954750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.490 [2024-07-26 11:37:29.954926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.490 [2024-07-26 11:37:29.954959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.490 [2024-07-26 11:37:29.954978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.490 [2024-07-26 11:37:29.954995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.490 [2024-07-26 11:37:29.955033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.490 qpair failed and we were unable to recover it.
00:29:34.490 [2024-07-26 11:37:29.964750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.490 [2024-07-26 11:37:29.964906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.490 [2024-07-26 11:37:29.964939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.490 [2024-07-26 11:37:29.964958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.490 [2024-07-26 11:37:29.964975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.490 [2024-07-26 11:37:29.965013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.490 qpair failed and we were unable to recover it.
00:29:34.490 [2024-07-26 11:37:29.974673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.490 [2024-07-26 11:37:29.974815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.490 [2024-07-26 11:37:29.974849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.490 [2024-07-26 11:37:29.974867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.490 [2024-07-26 11:37:29.974884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.490 [2024-07-26 11:37:29.974922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.490 qpair failed and we were unable to recover it.
00:29:34.490 [2024-07-26 11:37:29.984726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.490 [2024-07-26 11:37:29.984884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.490 [2024-07-26 11:37:29.984917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.490 [2024-07-26 11:37:29.984935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.490 [2024-07-26 11:37:29.984953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.490 [2024-07-26 11:37:29.984998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.490 qpair failed and we were unable to recover it.
00:29:34.490 [2024-07-26 11:37:29.994734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.490 [2024-07-26 11:37:29.994915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.490 [2024-07-26 11:37:29.994949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.490 [2024-07-26 11:37:29.994968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.490 [2024-07-26 11:37:29.994985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.490 [2024-07-26 11:37:29.995023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.490 qpair failed and we were unable to recover it.
00:29:34.490 [2024-07-26 11:37:30.004842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.490 [2024-07-26 11:37:30.005043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.490 [2024-07-26 11:37:30.005077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.490 [2024-07-26 11:37:30.005095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.490 [2024-07-26 11:37:30.005112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.490 [2024-07-26 11:37:30.005152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.490 qpair failed and we were unable to recover it.
00:29:34.490 [2024-07-26 11:37:30.014801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.490 [2024-07-26 11:37:30.014963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.490 [2024-07-26 11:37:30.015000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.490 [2024-07-26 11:37:30.015020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.490 [2024-07-26 11:37:30.015037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.490 [2024-07-26 11:37:30.015078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.490 qpair failed and we were unable to recover it.
00:29:34.490 [2024-07-26 11:37:30.024855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.490 [2024-07-26 11:37:30.025012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.490 [2024-07-26 11:37:30.025048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.490 [2024-07-26 11:37:30.025067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.490 [2024-07-26 11:37:30.025084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.490 [2024-07-26 11:37:30.025124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.490 qpair failed and we were unable to recover it.
00:29:34.490 [2024-07-26 11:37:30.034980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.490 [2024-07-26 11:37:30.035199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.490 [2024-07-26 11:37:30.035233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.490 [2024-07-26 11:37:30.035252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.490 [2024-07-26 11:37:30.035269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.490 [2024-07-26 11:37:30.035309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.490 qpair failed and we were unable to recover it.
00:29:34.490 [2024-07-26 11:37:30.044925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.490 [2024-07-26 11:37:30.045084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.490 [2024-07-26 11:37:30.045146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.490 [2024-07-26 11:37:30.045167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.490 [2024-07-26 11:37:30.045184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.490 [2024-07-26 11:37:30.045223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.490 qpair failed and we were unable to recover it.
00:29:34.491 [2024-07-26 11:37:30.054886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.491 [2024-07-26 11:37:30.055037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.491 [2024-07-26 11:37:30.055071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.491 [2024-07-26 11:37:30.055091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.491 [2024-07-26 11:37:30.055108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.491 [2024-07-26 11:37:30.055146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.491 qpair failed and we were unable to recover it.
00:29:34.491 [2024-07-26 11:37:30.065058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.491 [2024-07-26 11:37:30.065233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.491 [2024-07-26 11:37:30.065267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.491 [2024-07-26 11:37:30.065285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.491 [2024-07-26 11:37:30.065303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.491 [2024-07-26 11:37:30.065342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.491 qpair failed and we were unable to recover it.
00:29:34.491 [2024-07-26 11:37:30.075000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.491 [2024-07-26 11:37:30.075155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.491 [2024-07-26 11:37:30.075189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.491 [2024-07-26 11:37:30.075208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.491 [2024-07-26 11:37:30.075233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.491 [2024-07-26 11:37:30.075275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.491 qpair failed and we were unable to recover it.
00:29:34.491 [2024-07-26 11:37:30.085066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.491 [2024-07-26 11:37:30.085223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.491 [2024-07-26 11:37:30.085257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.491 [2024-07-26 11:37:30.085275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.491 [2024-07-26 11:37:30.085292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.491 [2024-07-26 11:37:30.085332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.491 qpair failed and we were unable to recover it.
00:29:34.491 [2024-07-26 11:37:30.095064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.491 [2024-07-26 11:37:30.095225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.491 [2024-07-26 11:37:30.095260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.491 [2024-07-26 11:37:30.095279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.491 [2024-07-26 11:37:30.095296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.491 [2024-07-26 11:37:30.095335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.491 qpair failed and we were unable to recover it.
00:29:34.491 [2024-07-26 11:37:30.105131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.491 [2024-07-26 11:37:30.105290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.491 [2024-07-26 11:37:30.105324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.491 [2024-07-26 11:37:30.105343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.491 [2024-07-26 11:37:30.105360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.491 [2024-07-26 11:37:30.105399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.491 qpair failed and we were unable to recover it.
00:29:34.491 [2024-07-26 11:37:30.115131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.491 [2024-07-26 11:37:30.115295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.491 [2024-07-26 11:37:30.115337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.491 [2024-07-26 11:37:30.115356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.491 [2024-07-26 11:37:30.115372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.491 [2024-07-26 11:37:30.115412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.491 qpair failed and we were unable to recover it.
00:29:34.491 [2024-07-26 11:37:30.125141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.491 [2024-07-26 11:37:30.125304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.491 [2024-07-26 11:37:30.125338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.491 [2024-07-26 11:37:30.125357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.491 [2024-07-26 11:37:30.125374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.491 [2024-07-26 11:37:30.125413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.491 qpair failed and we were unable to recover it.
00:29:34.491 [2024-07-26 11:37:30.135258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.491 [2024-07-26 11:37:30.135467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.491 [2024-07-26 11:37:30.135512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.491 [2024-07-26 11:37:30.135528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.491 [2024-07-26 11:37:30.135543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.491 [2024-07-26 11:37:30.135577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.491 qpair failed and we were unable to recover it.
00:29:34.491 [2024-07-26 11:37:30.145255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.491 [2024-07-26 11:37:30.145414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.491 [2024-07-26 11:37:30.145471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.491 [2024-07-26 11:37:30.145489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.491 [2024-07-26 11:37:30.145503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.491 [2024-07-26 11:37:30.145536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.491 qpair failed and we were unable to recover it.
00:29:34.751 [2024-07-26 11:37:30.155252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.751 [2024-07-26 11:37:30.155402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.751 [2024-07-26 11:37:30.155444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.751 [2024-07-26 11:37:30.155478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.751 [2024-07-26 11:37:30.155493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.751 [2024-07-26 11:37:30.155526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.751 qpair failed and we were unable to recover it.
00:29:34.751 [2024-07-26 11:37:30.165273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.751 [2024-07-26 11:37:30.165441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.751 [2024-07-26 11:37:30.165487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.751 [2024-07-26 11:37:30.165511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.751 [2024-07-26 11:37:30.165526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.751 [2024-07-26 11:37:30.165560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.751 qpair failed and we were unable to recover it.
00:29:34.751 [2024-07-26 11:37:30.175314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.751 [2024-07-26 11:37:30.175505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.751 [2024-07-26 11:37:30.175534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.751 [2024-07-26 11:37:30.175550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.751 [2024-07-26 11:37:30.175564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.751 [2024-07-26 11:37:30.175599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.751 qpair failed and we were unable to recover it.
00:29:34.751 [2024-07-26 11:37:30.185330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.751 [2024-07-26 11:37:30.185495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.751 [2024-07-26 11:37:30.185524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.751 [2024-07-26 11:37:30.185540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.751 [2024-07-26 11:37:30.185554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.751 [2024-07-26 11:37:30.185587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.751 qpair failed and we were unable to recover it.
00:29:34.751 [2024-07-26 11:37:30.195362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.751 [2024-07-26 11:37:30.195551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.751 [2024-07-26 11:37:30.195580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.751 [2024-07-26 11:37:30.195596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.751 [2024-07-26 11:37:30.195610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.751 [2024-07-26 11:37:30.195644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.751 qpair failed and we were unable to recover it.
00:29:34.751 [2024-07-26 11:37:30.205366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.751 [2024-07-26 11:37:30.205534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.751 [2024-07-26 11:37:30.205562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.751 [2024-07-26 11:37:30.205578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.751 [2024-07-26 11:37:30.205593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.751 [2024-07-26 11:37:30.205627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.751 qpair failed and we were unable to recover it.
00:29:34.751 [2024-07-26 11:37:30.215411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.751 [2024-07-26 11:37:30.215615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.751 [2024-07-26 11:37:30.215644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.751 [2024-07-26 11:37:30.215660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.751 [2024-07-26 11:37:30.215674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.751 [2024-07-26 11:37:30.215724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.751 qpair failed and we were unable to recover it.
00:29:34.751 [2024-07-26 11:37:30.225423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.751 [2024-07-26 11:37:30.225617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.751 [2024-07-26 11:37:30.225646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.751 [2024-07-26 11:37:30.225662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.751 [2024-07-26 11:37:30.225694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.751 [2024-07-26 11:37:30.225735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.751 qpair failed and we were unable to recover it.
00:29:34.751 [2024-07-26 11:37:30.235501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.751 [2024-07-26 11:37:30.235647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.751 [2024-07-26 11:37:30.235675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.751 [2024-07-26 11:37:30.235691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.751 [2024-07-26 11:37:30.235723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.751 [2024-07-26 11:37:30.235762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.752 qpair failed and we were unable to recover it.
00:29:34.752 [2024-07-26 11:37:30.245499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.752 [2024-07-26 11:37:30.245640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.752 [2024-07-26 11:37:30.245683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.752 [2024-07-26 11:37:30.245703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.752 [2024-07-26 11:37:30.245719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.752 [2024-07-26 11:37:30.245760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.752 qpair failed and we were unable to recover it.
00:29:34.752 [2024-07-26 11:37:30.255527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.752 [2024-07-26 11:37:30.255665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.752 [2024-07-26 11:37:30.255694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.752 [2024-07-26 11:37:30.255716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.752 [2024-07-26 11:37:30.255732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.752 [2024-07-26 11:37:30.255784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.752 qpair failed and we were unable to recover it.
00:29:34.752 [2024-07-26 11:37:30.265564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.752 [2024-07-26 11:37:30.265721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.752 [2024-07-26 11:37:30.265754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.752 [2024-07-26 11:37:30.265773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.752 [2024-07-26 11:37:30.265790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.752 [2024-07-26 11:37:30.265828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.752 qpair failed and we were unable to recover it.
00:29:34.752 [2024-07-26 11:37:30.275563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.752 [2024-07-26 11:37:30.275710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.752 [2024-07-26 11:37:30.275739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.752 [2024-07-26 11:37:30.275772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.752 [2024-07-26 11:37:30.275789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.752 [2024-07-26 11:37:30.275827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.752 qpair failed and we were unable to recover it.
00:29:34.752 [2024-07-26 11:37:30.285634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.752 [2024-07-26 11:37:30.285809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.752 [2024-07-26 11:37:30.285842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.752 [2024-07-26 11:37:30.285861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.752 [2024-07-26 11:37:30.285878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.752 [2024-07-26 11:37:30.285917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.752 qpair failed and we were unable to recover it.
00:29:34.752 [2024-07-26 11:37:30.295664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.752 [2024-07-26 11:37:30.295850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.752 [2024-07-26 11:37:30.295884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.752 [2024-07-26 11:37:30.295902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.752 [2024-07-26 11:37:30.295918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.752 [2024-07-26 11:37:30.295957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.752 qpair failed and we were unable to recover it.
00:29:34.752 [2024-07-26 11:37:30.305686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.752 [2024-07-26 11:37:30.305852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.752 [2024-07-26 11:37:30.305885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.752 [2024-07-26 11:37:30.305904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.752 [2024-07-26 11:37:30.305920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.752 [2024-07-26 11:37:30.305958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.752 qpair failed and we were unable to recover it.
00:29:34.752 [2024-07-26 11:37:30.315677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.752 [2024-07-26 11:37:30.315843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.752 [2024-07-26 11:37:30.315876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.752 [2024-07-26 11:37:30.315894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.752 [2024-07-26 11:37:30.315911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.752 [2024-07-26 11:37:30.315949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.752 qpair failed and we were unable to recover it.
00:29:34.752 [2024-07-26 11:37:30.325734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.752 [2024-07-26 11:37:30.325887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.752 [2024-07-26 11:37:30.325920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.752 [2024-07-26 11:37:30.325939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.752 [2024-07-26 11:37:30.325955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.752 [2024-07-26 11:37:30.325993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.752 qpair failed and we were unable to recover it.
00:29:34.752 [2024-07-26 11:37:30.335800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.752 [2024-07-26 11:37:30.335986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.752 [2024-07-26 11:37:30.336020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.752 [2024-07-26 11:37:30.336039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.752 [2024-07-26 11:37:30.336056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.752 [2024-07-26 11:37:30.336095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.752 qpair failed and we were unable to recover it.
00:29:34.752 [2024-07-26 11:37:30.345838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.752 [2024-07-26 11:37:30.345995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.752 [2024-07-26 11:37:30.346035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.752 [2024-07-26 11:37:30.346055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.752 [2024-07-26 11:37:30.346071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.752 [2024-07-26 11:37:30.346120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.752 qpair failed and we were unable to recover it.
00:29:34.752 [2024-07-26 11:37:30.355781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.752 [2024-07-26 11:37:30.355940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.752 [2024-07-26 11:37:30.355974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.752 [2024-07-26 11:37:30.355992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.752 [2024-07-26 11:37:30.356009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.752 [2024-07-26 11:37:30.356047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.752 qpair failed and we were unable to recover it.
00:29:34.752 [2024-07-26 11:37:30.365829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.752 [2024-07-26 11:37:30.365985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.752 [2024-07-26 11:37:30.366027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.752 [2024-07-26 11:37:30.366046] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.752 [2024-07-26 11:37:30.366063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.752 [2024-07-26 11:37:30.366104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.752 qpair failed and we were unable to recover it.
00:29:34.752 [2024-07-26 11:37:30.375901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.753 [2024-07-26 11:37:30.376051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.753 [2024-07-26 11:37:30.376095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.753 [2024-07-26 11:37:30.376114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.753 [2024-07-26 11:37:30.376130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.753 [2024-07-26 11:37:30.376171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.753 qpair failed and we were unable to recover it.
00:29:34.753 [2024-07-26 11:37:30.385928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.753 [2024-07-26 11:37:30.386140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.753 [2024-07-26 11:37:30.386173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.753 [2024-07-26 11:37:30.386191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.753 [2024-07-26 11:37:30.386207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.753 [2024-07-26 11:37:30.386254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.753 qpair failed and we were unable to recover it.
00:29:34.753 [2024-07-26 11:37:30.395902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.753 [2024-07-26 11:37:30.396057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.753 [2024-07-26 11:37:30.396091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.753 [2024-07-26 11:37:30.396109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.753 [2024-07-26 11:37:30.396127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.753 [2024-07-26 11:37:30.396166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.753 qpair failed and we were unable to recover it.
00:29:34.753 [2024-07-26 11:37:30.405995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.753 [2024-07-26 11:37:30.406155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.753 [2024-07-26 11:37:30.406188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.753 [2024-07-26 11:37:30.406206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.753 [2024-07-26 11:37:30.406223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:34.753 [2024-07-26 11:37:30.406264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:34.753 qpair failed and we were unable to recover it.
00:29:35.013 [2024-07-26 11:37:30.416014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.013 [2024-07-26 11:37:30.416170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.013 [2024-07-26 11:37:30.416212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.013 [2024-07-26 11:37:30.416230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.013 [2024-07-26 11:37:30.416246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:35.013 [2024-07-26 11:37:30.416284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.013 qpair failed and we were unable to recover it.
00:29:35.013 [2024-07-26 11:37:30.425990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.013 [2024-07-26 11:37:30.426147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.013 [2024-07-26 11:37:30.426180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.013 [2024-07-26 11:37:30.426199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.013 [2024-07-26 11:37:30.426216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:35.013 [2024-07-26 11:37:30.426256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.013 qpair failed and we were unable to recover it.
00:29:35.013 [2024-07-26 11:37:30.436140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.013 [2024-07-26 11:37:30.436324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.013 [2024-07-26 11:37:30.436364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.013 [2024-07-26 11:37:30.436384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.013 [2024-07-26 11:37:30.436401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:35.013 [2024-07-26 11:37:30.436448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.013 qpair failed and we were unable to recover it.
00:29:35.013 [2024-07-26 11:37:30.446096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.013 [2024-07-26 11:37:30.446266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.013 [2024-07-26 11:37:30.446300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.013 [2024-07-26 11:37:30.446318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.013 [2024-07-26 11:37:30.446335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:35.013 [2024-07-26 11:37:30.446373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.013 qpair failed and we were unable to recover it.
00:29:35.013 [2024-07-26 11:37:30.456090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.014 [2024-07-26 11:37:30.456253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.014 [2024-07-26 11:37:30.456285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.014 [2024-07-26 11:37:30.456303] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.014 [2024-07-26 11:37:30.456320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:35.014 [2024-07-26 11:37:30.456360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.014 qpair failed and we were unable to recover it.
00:29:35.014 [2024-07-26 11:37:30.466157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.014 [2024-07-26 11:37:30.466331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.014 [2024-07-26 11:37:30.466362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.014 [2024-07-26 11:37:30.466380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.014 [2024-07-26 11:37:30.466396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:35.014 [2024-07-26 11:37:30.466441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.014 qpair failed and we were unable to recover it.
00:29:35.014 [2024-07-26 11:37:30.476199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.014 [2024-07-26 11:37:30.476362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.014 [2024-07-26 11:37:30.476404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.014 [2024-07-26 11:37:30.476422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.014 [2024-07-26 11:37:30.476455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:35.014 [2024-07-26 11:37:30.476511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.014 qpair failed and we were unable to recover it.
00:29:35.014 [2024-07-26 11:37:30.486196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.014 [2024-07-26 11:37:30.486378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.014 [2024-07-26 11:37:30.486411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.014 [2024-07-26 11:37:30.486442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.014 [2024-07-26 11:37:30.486477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:35.014 [2024-07-26 11:37:30.486511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.014 qpair failed and we were unable to recover it.
00:29:35.014 [2024-07-26 11:37:30.496206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.014 [2024-07-26 11:37:30.496364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.014 [2024-07-26 11:37:30.496398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.014 [2024-07-26 11:37:30.496417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.014 [2024-07-26 11:37:30.496444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:35.014 [2024-07-26 11:37:30.496496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.014 qpair failed and we were unable to recover it.
00:29:35.014 [2024-07-26 11:37:30.506225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.014 [2024-07-26 11:37:30.506388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.014 [2024-07-26 11:37:30.506421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.014 [2024-07-26 11:37:30.506449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.014 [2024-07-26 11:37:30.506481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:35.014 [2024-07-26 11:37:30.506516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.014 qpair failed and we were unable to recover it.
00:29:35.014 [2024-07-26 11:37:30.516261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.014 [2024-07-26 11:37:30.516420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.014 [2024-07-26 11:37:30.516475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.014 [2024-07-26 11:37:30.516492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.014 [2024-07-26 11:37:30.516506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.014 [2024-07-26 11:37:30.516539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.014 qpair failed and we were unable to recover it. 00:29:35.014 [2024-07-26 11:37:30.526268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.014 [2024-07-26 11:37:30.526496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.014 [2024-07-26 11:37:30.526524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.014 [2024-07-26 11:37:30.526541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.014 [2024-07-26 11:37:30.526555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.014 [2024-07-26 11:37:30.526588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.014 qpair failed and we were unable to recover it. 00:29:35.014 [2024-07-26 11:37:30.536333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.014 [2024-07-26 11:37:30.536522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.014 [2024-07-26 11:37:30.536551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.014 [2024-07-26 11:37:30.536566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.014 [2024-07-26 11:37:30.536581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.014 [2024-07-26 11:37:30.536616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.014 qpair failed and we were unable to recover it. 
00:29:35.014 [2024-07-26 11:37:30.546341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.014 [2024-07-26 11:37:30.546515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.014 [2024-07-26 11:37:30.546543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.014 [2024-07-26 11:37:30.546559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.014 [2024-07-26 11:37:30.546573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.014 [2024-07-26 11:37:30.546606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.014 qpair failed and we were unable to recover it. 00:29:35.014 [2024-07-26 11:37:30.556379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.014 [2024-07-26 11:37:30.556562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.014 [2024-07-26 11:37:30.556591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.014 [2024-07-26 11:37:30.556607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.014 [2024-07-26 11:37:30.556621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.014 [2024-07-26 11:37:30.556654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.014 qpair failed and we were unable to recover it. 00:29:35.014 [2024-07-26 11:37:30.566401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.014 [2024-07-26 11:37:30.566578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.014 [2024-07-26 11:37:30.566606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.015 [2024-07-26 11:37:30.566629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.015 [2024-07-26 11:37:30.566645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.015 [2024-07-26 11:37:30.566707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.015 qpair failed and we were unable to recover it. 
00:29:35.015 [2024-07-26 11:37:30.576393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.015 [2024-07-26 11:37:30.576532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.015 [2024-07-26 11:37:30.576560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.015 [2024-07-26 11:37:30.576576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.015 [2024-07-26 11:37:30.576590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.015 [2024-07-26 11:37:30.576624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.015 qpair failed and we were unable to recover it. 00:29:35.015 [2024-07-26 11:37:30.586499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.015 [2024-07-26 11:37:30.586639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.015 [2024-07-26 11:37:30.586668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.015 [2024-07-26 11:37:30.586684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.015 [2024-07-26 11:37:30.586716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.015 [2024-07-26 11:37:30.586757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.015 qpair failed and we were unable to recover it. 00:29:35.015 [2024-07-26 11:37:30.596527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.015 [2024-07-26 11:37:30.596688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.015 [2024-07-26 11:37:30.596732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.015 [2024-07-26 11:37:30.596751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.015 [2024-07-26 11:37:30.596768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.015 [2024-07-26 11:37:30.596806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.015 qpair failed and we were unable to recover it. 
00:29:35.015 [2024-07-26 11:37:30.606547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.015 [2024-07-26 11:37:30.606710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.015 [2024-07-26 11:37:30.606744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.015 [2024-07-26 11:37:30.606763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.015 [2024-07-26 11:37:30.606779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.015 [2024-07-26 11:37:30.606819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.015 qpair failed and we were unable to recover it. 00:29:35.015 [2024-07-26 11:37:30.616576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.015 [2024-07-26 11:37:30.616706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.015 [2024-07-26 11:37:30.616735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.015 [2024-07-26 11:37:30.616751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.015 [2024-07-26 11:37:30.616781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.015 [2024-07-26 11:37:30.616822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.015 qpair failed and we were unable to recover it. 00:29:35.015 [2024-07-26 11:37:30.626577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.015 [2024-07-26 11:37:30.626722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.015 [2024-07-26 11:37:30.626754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.015 [2024-07-26 11:37:30.626773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.015 [2024-07-26 11:37:30.626790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.015 [2024-07-26 11:37:30.626828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.015 qpair failed and we were unable to recover it. 
00:29:35.015 [2024-07-26 11:37:30.636596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.015 [2024-07-26 11:37:30.636767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.015 [2024-07-26 11:37:30.636800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.015 [2024-07-26 11:37:30.636818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.015 [2024-07-26 11:37:30.636835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.015 [2024-07-26 11:37:30.636874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.015 qpair failed and we were unable to recover it. 00:29:35.015 [2024-07-26 11:37:30.646605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.015 [2024-07-26 11:37:30.646761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.015 [2024-07-26 11:37:30.646794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.015 [2024-07-26 11:37:30.646813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.015 [2024-07-26 11:37:30.646829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.015 [2024-07-26 11:37:30.646867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.015 qpair failed and we were unable to recover it. 00:29:35.015 [2024-07-26 11:37:30.656655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.015 [2024-07-26 11:37:30.656818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.015 [2024-07-26 11:37:30.656851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.015 [2024-07-26 11:37:30.656877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.015 [2024-07-26 11:37:30.656895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.015 [2024-07-26 11:37:30.656946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.015 qpair failed and we were unable to recover it. 
00:29:35.015 [2024-07-26 11:37:30.666680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.015 [2024-07-26 11:37:30.666849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.015 [2024-07-26 11:37:30.666882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.015 [2024-07-26 11:37:30.666900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.015 [2024-07-26 11:37:30.666917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.015 [2024-07-26 11:37:30.666964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.015 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-26 11:37:30.676767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.275 [2024-07-26 11:37:30.676928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.275 [2024-07-26 11:37:30.676962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.275 [2024-07-26 11:37:30.676981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.275 [2024-07-26 11:37:30.676998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.275 [2024-07-26 11:37:30.677043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-26 11:37:30.686711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.275 [2024-07-26 11:37:30.686894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.275 [2024-07-26 11:37:30.686928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.275 [2024-07-26 11:37:30.686946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.275 [2024-07-26 11:37:30.686963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.275 [2024-07-26 11:37:30.687002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.275 qpair failed and we were unable to recover it. 
00:29:35.275 [2024-07-26 11:37:30.696818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.275 [2024-07-26 11:37:30.696981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.275 [2024-07-26 11:37:30.697014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.275 [2024-07-26 11:37:30.697033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.275 [2024-07-26 11:37:30.697050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.275 [2024-07-26 11:37:30.697087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-26 11:37:30.706797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.275 [2024-07-26 11:37:30.706955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.275 [2024-07-26 11:37:30.706988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.275 [2024-07-26 11:37:30.707006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.275 [2024-07-26 11:37:30.707022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.275 [2024-07-26 11:37:30.707061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-26 11:37:30.716835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.275 [2024-07-26 11:37:30.716989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.275 [2024-07-26 11:37:30.717023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.275 [2024-07-26 11:37:30.717041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.275 [2024-07-26 11:37:30.717058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.275 [2024-07-26 11:37:30.717096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.275 qpair failed and we were unable to recover it. 
00:29:35.275 [2024-07-26 11:37:30.726834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.275 [2024-07-26 11:37:30.726987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.275 [2024-07-26 11:37:30.727020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.275 [2024-07-26 11:37:30.727039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.275 [2024-07-26 11:37:30.727056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.275 [2024-07-26 11:37:30.727093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-26 11:37:30.736877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.275 [2024-07-26 11:37:30.737026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.275 [2024-07-26 11:37:30.737060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.275 [2024-07-26 11:37:30.737078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.275 [2024-07-26 11:37:30.737095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.275 [2024-07-26 11:37:30.737133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.275 qpair failed and we were unable to recover it. 00:29:35.275 [2024-07-26 11:37:30.746922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.275 [2024-07-26 11:37:30.747085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.275 [2024-07-26 11:37:30.747124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.275 [2024-07-26 11:37:30.747144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.275 [2024-07-26 11:37:30.747160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.275 [2024-07-26 11:37:30.747198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.275 qpair failed and we were unable to recover it. 
00:29:35.275 [2024-07-26 11:37:30.756943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.275 [2024-07-26 11:37:30.757134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.275 [2024-07-26 11:37:30.757167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.275 [2024-07-26 11:37:30.757185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.276 [2024-07-26 11:37:30.757202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.276 [2024-07-26 11:37:30.757241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-26 11:37:30.766978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.276 [2024-07-26 11:37:30.767129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.276 [2024-07-26 11:37:30.767162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.276 [2024-07-26 11:37:30.767180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.276 [2024-07-26 11:37:30.767197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.276 [2024-07-26 11:37:30.767236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-26 11:37:30.777027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.276 [2024-07-26 11:37:30.777194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.276 [2024-07-26 11:37:30.777227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.276 [2024-07-26 11:37:30.777247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.276 [2024-07-26 11:37:30.777263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.276 [2024-07-26 11:37:30.777301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.276 qpair failed and we were unable to recover it. 
00:29:35.276 [2024-07-26 11:37:30.787072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.276 [2024-07-26 11:37:30.787236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.276 [2024-07-26 11:37:30.787269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.276 [2024-07-26 11:37:30.787288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.276 [2024-07-26 11:37:30.787306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.276 [2024-07-26 11:37:30.787352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-26 11:37:30.797126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.276 [2024-07-26 11:37:30.797284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.276 [2024-07-26 11:37:30.797318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.276 [2024-07-26 11:37:30.797337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.276 [2024-07-26 11:37:30.797354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.276 [2024-07-26 11:37:30.797393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-26 11:37:30.807110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.276 [2024-07-26 11:37:30.807263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.276 [2024-07-26 11:37:30.807297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.276 [2024-07-26 11:37:30.807316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.276 [2024-07-26 11:37:30.807332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.276 [2024-07-26 11:37:30.807371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.276 qpair failed and we were unable to recover it. 
00:29:35.276 [2024-07-26 11:37:30.817137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.276 [2024-07-26 11:37:30.817304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.276 [2024-07-26 11:37:30.817337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.276 [2024-07-26 11:37:30.817356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.276 [2024-07-26 11:37:30.817373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.276 [2024-07-26 11:37:30.817413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-26 11:37:30.827171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.276 [2024-07-26 11:37:30.827337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.276 [2024-07-26 11:37:30.827370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.276 [2024-07-26 11:37:30.827388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.276 [2024-07-26 11:37:30.827405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.276 [2024-07-26 11:37:30.827452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-26 11:37:30.837206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.276 [2024-07-26 11:37:30.837368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.276 [2024-07-26 11:37:30.837408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.276 [2024-07-26 11:37:30.837435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.276 [2024-07-26 11:37:30.837454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.276 [2024-07-26 11:37:30.837505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.276 qpair failed and we were unable to recover it. 
00:29:35.276 [2024-07-26 11:37:30.847235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.276 [2024-07-26 11:37:30.847423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.276 [2024-07-26 11:37:30.847483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.276 [2024-07-26 11:37:30.847499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.276 [2024-07-26 11:37:30.847513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.276 [2024-07-26 11:37:30.847547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-26 11:37:30.857246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.276 [2024-07-26 11:37:30.857401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.276 [2024-07-26 11:37:30.857443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.276 [2024-07-26 11:37:30.857479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.276 [2024-07-26 11:37:30.857493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.276 [2024-07-26 11:37:30.857527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-26 11:37:30.867288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.276 [2024-07-26 11:37:30.867508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.276 [2024-07-26 11:37:30.867537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.276 [2024-07-26 11:37:30.867553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.276 [2024-07-26 11:37:30.867567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.276 [2024-07-26 11:37:30.867601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.276 qpair failed and we were unable to recover it. 
00:29:35.276 [2024-07-26 11:37:30.877319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.276 [2024-07-26 11:37:30.877504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.276 [2024-07-26 11:37:30.877533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.276 [2024-07-26 11:37:30.877549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.276 [2024-07-26 11:37:30.877570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.276 [2024-07-26 11:37:30.877603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.276 qpair failed and we were unable to recover it. 00:29:35.276 [2024-07-26 11:37:30.887357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.276 [2024-07-26 11:37:30.887538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.276 [2024-07-26 11:37:30.887566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.276 [2024-07-26 11:37:30.887582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.277 [2024-07-26 11:37:30.887596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.277 [2024-07-26 11:37:30.887629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.277 qpair failed and we were unable to recover it. 00:29:35.277 [2024-07-26 11:37:30.897450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.277 [2024-07-26 11:37:30.897604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.277 [2024-07-26 11:37:30.897632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.277 [2024-07-26 11:37:30.897648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.277 [2024-07-26 11:37:30.897662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.277 [2024-07-26 11:37:30.897695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.277 qpair failed and we were unable to recover it. 
00:29:35.277 [2024-07-26 11:37:30.907419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.277 [2024-07-26 11:37:30.907601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.277 [2024-07-26 11:37:30.907629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.277 [2024-07-26 11:37:30.907645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.277 [2024-07-26 11:37:30.907660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.277 [2024-07-26 11:37:30.907694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.277 qpair failed and we were unable to recover it. 00:29:35.277 [2024-07-26 11:37:30.917531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.277 [2024-07-26 11:37:30.917667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.277 [2024-07-26 11:37:30.917696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.277 [2024-07-26 11:37:30.917729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.277 [2024-07-26 11:37:30.917747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.277 [2024-07-26 11:37:30.917786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.277 qpair failed and we were unable to recover it. 00:29:35.277 [2024-07-26 11:37:30.927487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.277 [2024-07-26 11:37:30.927630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.277 [2024-07-26 11:37:30.927659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.277 [2024-07-26 11:37:30.927675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.277 [2024-07-26 11:37:30.927689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.277 [2024-07-26 11:37:30.927740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.277 qpair failed and we were unable to recover it. 
00:29:35.536 [2024-07-26 11:37:30.937515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.536 [2024-07-26 11:37:30.937646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.536 [2024-07-26 11:37:30.937675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.536 [2024-07-26 11:37:30.937690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.537 [2024-07-26 11:37:30.937704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.537 [2024-07-26 11:37:30.937760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.537 qpair failed and we were unable to recover it. 00:29:35.537 [2024-07-26 11:37:30.947567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.537 [2024-07-26 11:37:30.947739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.537 [2024-07-26 11:37:30.947772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.537 [2024-07-26 11:37:30.947790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.537 [2024-07-26 11:37:30.947807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.537 [2024-07-26 11:37:30.947846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.537 qpair failed and we were unable to recover it. 00:29:35.537 [2024-07-26 11:37:30.957552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.537 [2024-07-26 11:37:30.957689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.537 [2024-07-26 11:37:30.957718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.537 [2024-07-26 11:37:30.957734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.537 [2024-07-26 11:37:30.957748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.537 [2024-07-26 11:37:30.957800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.537 qpair failed and we were unable to recover it. 
00:29:35.537 [2024-07-26 11:37:30.967575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.537 [2024-07-26 11:37:30.967731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.537 [2024-07-26 11:37:30.967763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.537 [2024-07-26 11:37:30.967781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.537 [2024-07-26 11:37:30.967805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.537 [2024-07-26 11:37:30.967846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.537 qpair failed and we were unable to recover it. 00:29:35.537 [2024-07-26 11:37:30.977602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.537 [2024-07-26 11:37:30.977736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.537 [2024-07-26 11:37:30.977785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.537 [2024-07-26 11:37:30.977803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.537 [2024-07-26 11:37:30.977820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.537 [2024-07-26 11:37:30.977860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.537 qpair failed and we were unable to recover it. 00:29:35.537 [2024-07-26 11:37:30.987645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.537 [2024-07-26 11:37:30.987808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.537 [2024-07-26 11:37:30.987842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.537 [2024-07-26 11:37:30.987861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.537 [2024-07-26 11:37:30.987878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.537 [2024-07-26 11:37:30.987917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.537 qpair failed and we were unable to recover it. 
00:29:35.537 [2024-07-26 11:37:30.997660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.537 [2024-07-26 11:37:30.997810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.537 [2024-07-26 11:37:30.997844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.537 [2024-07-26 11:37:30.997863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.537 [2024-07-26 11:37:30.997880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.537 [2024-07-26 11:37:30.997918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.537 qpair failed and we were unable to recover it. 00:29:35.537 [2024-07-26 11:37:31.007784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.537 [2024-07-26 11:37:31.007945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.537 [2024-07-26 11:37:31.007978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.537 [2024-07-26 11:37:31.007996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.537 [2024-07-26 11:37:31.008013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.537 [2024-07-26 11:37:31.008054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.537 qpair failed and we were unable to recover it. 00:29:35.537 [2024-07-26 11:37:31.017768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.537 [2024-07-26 11:37:31.017950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.537 [2024-07-26 11:37:31.017984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.537 [2024-07-26 11:37:31.018002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.537 [2024-07-26 11:37:31.018019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:35.537 [2024-07-26 11:37:31.018058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.537 qpair failed and we were unable to recover it. 
00:29:35.537 [2024-07-26 11:37:31.027794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.537 [2024-07-26 11:37:31.027959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.537 [2024-07-26 11:37:31.027992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.537 [2024-07-26 11:37:31.028011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.537 [2024-07-26 11:37:31.028028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90
00:29:35.537 [2024-07-26 11:37:31.028067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.537 qpair failed and we were unable to recover it.
[The identical seven-line failure sequence repeats for 68 further I/O qpair connect attempts, roughly one every 10 ms from 11:37:31.037 through 11:37:31.710, differing only in timestamps; every attempt ends with "qpair failed and we were unable to recover it."]
00:29:36.321 [2024-07-26 11:37:31.719783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.321 [2024-07-26 11:37:31.719916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.321 [2024-07-26 11:37:31.719944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.321 [2024-07-26 11:37:31.719960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.321 [2024-07-26 11:37:31.719974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.321 [2024-07-26 11:37:31.720009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.321 qpair failed and we were unable to recover it. 00:29:36.321 [2024-07-26 11:37:31.729853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.321 [2024-07-26 11:37:31.730048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.321 [2024-07-26 11:37:31.730081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.321 [2024-07-26 11:37:31.730101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.321 [2024-07-26 11:37:31.730118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.321 [2024-07-26 11:37:31.730155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.321 qpair failed and we were unable to recover it. 00:29:36.321 [2024-07-26 11:37:31.739867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.321 [2024-07-26 11:37:31.740048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.321 [2024-07-26 11:37:31.740082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.321 [2024-07-26 11:37:31.740100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.321 [2024-07-26 11:37:31.740117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.321 [2024-07-26 11:37:31.740156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.321 qpair failed and we were unable to recover it. 
00:29:36.321 [2024-07-26 11:37:31.749952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.321 [2024-07-26 11:37:31.750112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.321 [2024-07-26 11:37:31.750145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.321 [2024-07-26 11:37:31.750163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.321 [2024-07-26 11:37:31.750180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.321 [2024-07-26 11:37:31.750218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.321 qpair failed and we were unable to recover it. 00:29:36.321 [2024-07-26 11:37:31.759973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.321 [2024-07-26 11:37:31.760127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.321 [2024-07-26 11:37:31.760160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.321 [2024-07-26 11:37:31.760178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.321 [2024-07-26 11:37:31.760195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.321 [2024-07-26 11:37:31.760233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.321 qpair failed and we were unable to recover it. 00:29:36.321 [2024-07-26 11:37:31.769968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.321 [2024-07-26 11:37:31.770130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.321 [2024-07-26 11:37:31.770162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.321 [2024-07-26 11:37:31.770181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.321 [2024-07-26 11:37:31.770205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.321 [2024-07-26 11:37:31.770244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.321 qpair failed and we were unable to recover it. 
00:29:36.321 [2024-07-26 11:37:31.780031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.321 [2024-07-26 11:37:31.780203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.321 [2024-07-26 11:37:31.780237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.321 [2024-07-26 11:37:31.780255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.321 [2024-07-26 11:37:31.780272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.321 [2024-07-26 11:37:31.780310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.321 qpair failed and we were unable to recover it. 00:29:36.321 [2024-07-26 11:37:31.790090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.321 [2024-07-26 11:37:31.790296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.321 [2024-07-26 11:37:31.790329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.321 [2024-07-26 11:37:31.790348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.321 [2024-07-26 11:37:31.790365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.321 [2024-07-26 11:37:31.790403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.321 qpair failed and we were unable to recover it. 00:29:36.321 [2024-07-26 11:37:31.800209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.321 [2024-07-26 11:37:31.800396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.321 [2024-07-26 11:37:31.800439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.321 [2024-07-26 11:37:31.800482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.321 [2024-07-26 11:37:31.800496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.321 [2024-07-26 11:37:31.800530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.321 qpair failed and we were unable to recover it. 
00:29:36.321 [2024-07-26 11:37:31.810125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.321 [2024-07-26 11:37:31.810374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.321 [2024-07-26 11:37:31.810407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.321 [2024-07-26 11:37:31.810425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.321 [2024-07-26 11:37:31.810470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.321 [2024-07-26 11:37:31.810506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.321 qpair failed and we were unable to recover it. 00:29:36.321 [2024-07-26 11:37:31.820190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.321 [2024-07-26 11:37:31.820379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.321 [2024-07-26 11:37:31.820412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.321 [2024-07-26 11:37:31.820440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.321 [2024-07-26 11:37:31.820482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.321 [2024-07-26 11:37:31.820516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.321 qpair failed and we were unable to recover it. 00:29:36.321 [2024-07-26 11:37:31.830218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.321 [2024-07-26 11:37:31.830382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.321 [2024-07-26 11:37:31.830415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.321 [2024-07-26 11:37:31.830442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.322 [2024-07-26 11:37:31.830475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.322 [2024-07-26 11:37:31.830512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.322 qpair failed and we were unable to recover it. 
00:29:36.322 [2024-07-26 11:37:31.840292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.322 [2024-07-26 11:37:31.840476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.322 [2024-07-26 11:37:31.840505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.322 [2024-07-26 11:37:31.840520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.322 [2024-07-26 11:37:31.840535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.322 [2024-07-26 11:37:31.840568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.322 qpair failed and we were unable to recover it. 00:29:36.322 [2024-07-26 11:37:31.850260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.322 [2024-07-26 11:37:31.850410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.322 [2024-07-26 11:37:31.850450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.322 [2024-07-26 11:37:31.850485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.322 [2024-07-26 11:37:31.850500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.322 [2024-07-26 11:37:31.850533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.322 qpair failed and we were unable to recover it. 00:29:36.322 [2024-07-26 11:37:31.860250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.322 [2024-07-26 11:37:31.860407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.322 [2024-07-26 11:37:31.860448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.322 [2024-07-26 11:37:31.860489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.322 [2024-07-26 11:37:31.860504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.322 [2024-07-26 11:37:31.860537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.322 qpair failed and we were unable to recover it. 
00:29:36.322 [2024-07-26 11:37:31.870302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.322 [2024-07-26 11:37:31.870499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.322 [2024-07-26 11:37:31.870528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.322 [2024-07-26 11:37:31.870544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.322 [2024-07-26 11:37:31.870558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.322 [2024-07-26 11:37:31.870591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.322 qpair failed and we were unable to recover it. 00:29:36.322 [2024-07-26 11:37:31.880387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.322 [2024-07-26 11:37:31.880557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.322 [2024-07-26 11:37:31.880586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.322 [2024-07-26 11:37:31.880602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.322 [2024-07-26 11:37:31.880616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.322 [2024-07-26 11:37:31.880649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.322 qpair failed and we were unable to recover it. 00:29:36.322 [2024-07-26 11:37:31.890381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.322 [2024-07-26 11:37:31.890571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.322 [2024-07-26 11:37:31.890599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.322 [2024-07-26 11:37:31.890615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.322 [2024-07-26 11:37:31.890629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.322 [2024-07-26 11:37:31.890678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.322 qpair failed and we were unable to recover it. 
00:29:36.322 [2024-07-26 11:37:31.900440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.322 [2024-07-26 11:37:31.900606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.322 [2024-07-26 11:37:31.900635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.322 [2024-07-26 11:37:31.900651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.322 [2024-07-26 11:37:31.900666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.322 [2024-07-26 11:37:31.900698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.322 qpair failed and we were unable to recover it. 00:29:36.322 [2024-07-26 11:37:31.910425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.322 [2024-07-26 11:37:31.910595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.322 [2024-07-26 11:37:31.910624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.322 [2024-07-26 11:37:31.910639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.322 [2024-07-26 11:37:31.910654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.322 [2024-07-26 11:37:31.910687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.322 qpair failed and we were unable to recover it. 00:29:36.322 [2024-07-26 11:37:31.920503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.322 [2024-07-26 11:37:31.920683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.322 [2024-07-26 11:37:31.920717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.322 [2024-07-26 11:37:31.920735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.322 [2024-07-26 11:37:31.920752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.322 [2024-07-26 11:37:31.920790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.322 qpair failed and we were unable to recover it. 
00:29:36.322 [2024-07-26 11:37:31.930547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.322 [2024-07-26 11:37:31.930691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.322 [2024-07-26 11:37:31.930737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.322 [2024-07-26 11:37:31.930756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.322 [2024-07-26 11:37:31.930772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.322 [2024-07-26 11:37:31.930813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.322 qpair failed and we were unable to recover it. 00:29:36.322 [2024-07-26 11:37:31.940542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.322 [2024-07-26 11:37:31.940675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.322 [2024-07-26 11:37:31.940703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.322 [2024-07-26 11:37:31.940719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.322 [2024-07-26 11:37:31.940733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.322 [2024-07-26 11:37:31.940781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.322 qpair failed and we were unable to recover it. 00:29:36.322 [2024-07-26 11:37:31.950565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.322 [2024-07-26 11:37:31.950718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.322 [2024-07-26 11:37:31.950752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.322 [2024-07-26 11:37:31.950779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.322 [2024-07-26 11:37:31.950797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.322 [2024-07-26 11:37:31.950836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.322 qpair failed and we were unable to recover it. 
00:29:36.322 [2024-07-26 11:37:31.960581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.322 [2024-07-26 11:37:31.960784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.322 [2024-07-26 11:37:31.960817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.322 [2024-07-26 11:37:31.960837] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.323 [2024-07-26 11:37:31.960853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.323 [2024-07-26 11:37:31.960903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.323 qpair failed and we were unable to recover it. 00:29:36.323 [2024-07-26 11:37:31.970708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.323 [2024-07-26 11:37:31.970888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.323 [2024-07-26 11:37:31.970921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.323 [2024-07-26 11:37:31.970940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.323 [2024-07-26 11:37:31.970957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.323 [2024-07-26 11:37:31.970996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.323 qpair failed and we were unable to recover it. 00:29:36.323 [2024-07-26 11:37:31.980678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.582 [2024-07-26 11:37:31.980922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.583 [2024-07-26 11:37:31.980955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.583 [2024-07-26 11:37:31.980974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.583 [2024-07-26 11:37:31.980989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.583 [2024-07-26 11:37:31.981026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.583 qpair failed and we were unable to recover it. 
00:29:36.583 [2024-07-26 11:37:31.990895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.583 [2024-07-26 11:37:31.991127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.583 [2024-07-26 11:37:31.991160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.583 [2024-07-26 11:37:31.991179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.583 [2024-07-26 11:37:31.991196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.583 [2024-07-26 11:37:31.991234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.583 qpair failed and we were unable to recover it. 00:29:36.583 [2024-07-26 11:37:32.000723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.583 [2024-07-26 11:37:32.000925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.583 [2024-07-26 11:37:32.000959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.583 [2024-07-26 11:37:32.000978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.583 [2024-07-26 11:37:32.000995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.583 [2024-07-26 11:37:32.001034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.583 qpair failed and we were unable to recover it. 00:29:36.583 [2024-07-26 11:37:32.010728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.583 [2024-07-26 11:37:32.010891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.583 [2024-07-26 11:37:32.010925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.583 [2024-07-26 11:37:32.010944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.583 [2024-07-26 11:37:32.010960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.583 [2024-07-26 11:37:32.010999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.583 qpair failed and we were unable to recover it. 
00:29:36.583 [2024-07-26 11:37:32.020801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.583 [2024-07-26 11:37:32.020984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.583 [2024-07-26 11:37:32.021017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.583 [2024-07-26 11:37:32.021036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.583 [2024-07-26 11:37:32.021052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.583 [2024-07-26 11:37:32.021092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.583 qpair failed and we were unable to recover it. 00:29:36.583 [2024-07-26 11:37:32.030866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.583 [2024-07-26 11:37:32.031022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.583 [2024-07-26 11:37:32.031055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.583 [2024-07-26 11:37:32.031073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.583 [2024-07-26 11:37:32.031090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.583 [2024-07-26 11:37:32.031129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.583 qpair failed and we were unable to recover it. 00:29:36.583 [2024-07-26 11:37:32.040893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.583 [2024-07-26 11:37:32.041038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.583 [2024-07-26 11:37:32.041079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.583 [2024-07-26 11:37:32.041098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.583 [2024-07-26 11:37:32.041115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.583 [2024-07-26 11:37:32.041156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.583 qpair failed and we were unable to recover it. 
00:29:36.583 [2024-07-26 11:37:32.050902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.583 [2024-07-26 11:37:32.051059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.583 [2024-07-26 11:37:32.051091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.583 [2024-07-26 11:37:32.051110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.583 [2024-07-26 11:37:32.051128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.583 [2024-07-26 11:37:32.051166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.583 qpair failed and we were unable to recover it. 00:29:36.583 [2024-07-26 11:37:32.060954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.583 [2024-07-26 11:37:32.061099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.583 [2024-07-26 11:37:32.061133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.583 [2024-07-26 11:37:32.061151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.583 [2024-07-26 11:37:32.061168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.583 [2024-07-26 11:37:32.061206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.583 qpair failed and we were unable to recover it. 00:29:36.583 [2024-07-26 11:37:32.070908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.583 [2024-07-26 11:37:32.071073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.583 [2024-07-26 11:37:32.071107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.583 [2024-07-26 11:37:32.071126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.583 [2024-07-26 11:37:32.071142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.583 [2024-07-26 11:37:32.071180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.583 qpair failed and we were unable to recover it. 
00:29:36.583 [2024-07-26 11:37:32.080958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.583 [2024-07-26 11:37:32.081159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.583 [2024-07-26 11:37:32.081194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.583 [2024-07-26 11:37:32.081212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.583 [2024-07-26 11:37:32.081230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.583 [2024-07-26 11:37:32.081276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.583 qpair failed and we were unable to recover it. 00:29:36.583 [2024-07-26 11:37:32.091024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.583 [2024-07-26 11:37:32.091220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.583 [2024-07-26 11:37:32.091253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.583 [2024-07-26 11:37:32.091272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.583 [2024-07-26 11:37:32.091289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.583 [2024-07-26 11:37:32.091329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.583 qpair failed and we were unable to recover it. 00:29:36.583 [2024-07-26 11:37:32.101042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.583 [2024-07-26 11:37:32.101241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.584 [2024-07-26 11:37:32.101274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.584 [2024-07-26 11:37:32.101293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.584 [2024-07-26 11:37:32.101310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.584 [2024-07-26 11:37:32.101347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.584 qpair failed and we were unable to recover it. 
00:29:36.584 [2024-07-26 11:37:32.111087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.584 [2024-07-26 11:37:32.111273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.584 [2024-07-26 11:37:32.111307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.584 [2024-07-26 11:37:32.111326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.584 [2024-07-26 11:37:32.111342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.584 [2024-07-26 11:37:32.111380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.584 qpair failed and we were unable to recover it. 00:29:36.584 [2024-07-26 11:37:32.121190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.584 [2024-07-26 11:37:32.121382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.584 [2024-07-26 11:37:32.121414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.584 [2024-07-26 11:37:32.121441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.584 [2024-07-26 11:37:32.121463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.584 [2024-07-26 11:37:32.121515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.584 qpair failed and we were unable to recover it. 00:29:36.584 [2024-07-26 11:37:32.131087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.584 [2024-07-26 11:37:32.131256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.584 [2024-07-26 11:37:32.131292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.584 [2024-07-26 11:37:32.131309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.584 [2024-07-26 11:37:32.131322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.584 [2024-07-26 11:37:32.131356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.584 qpair failed and we were unable to recover it. 
00:29:36.584 [2024-07-26 11:37:32.141123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.584 [2024-07-26 11:37:32.141277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.584 [2024-07-26 11:37:32.141311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.584 [2024-07-26 11:37:32.141330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.584 [2024-07-26 11:37:32.141346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.584 [2024-07-26 11:37:32.141384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.584 qpair failed and we were unable to recover it. 00:29:36.584 [2024-07-26 11:37:32.151278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.584 [2024-07-26 11:37:32.151475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.584 [2024-07-26 11:37:32.151503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.584 [2024-07-26 11:37:32.151519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.584 [2024-07-26 11:37:32.151533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.584 [2024-07-26 11:37:32.151567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.584 qpair failed and we were unable to recover it. 00:29:36.584 [2024-07-26 11:37:32.161162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.584 [2024-07-26 11:37:32.161322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.584 [2024-07-26 11:37:32.161355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.584 [2024-07-26 11:37:32.161373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.584 [2024-07-26 11:37:32.161389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.584 [2024-07-26 11:37:32.161440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.584 qpair failed and we were unable to recover it. 
00:29:36.584 [2024-07-26 11:37:32.171236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.584 [2024-07-26 11:37:32.171445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.584 [2024-07-26 11:37:32.171490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.584 [2024-07-26 11:37:32.171507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.584 [2024-07-26 11:37:32.171528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.584 [2024-07-26 11:37:32.171563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.584 qpair failed and we were unable to recover it. 00:29:36.584 [2024-07-26 11:37:32.181300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.584 [2024-07-26 11:37:32.181455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.584 [2024-07-26 11:37:32.181503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.584 [2024-07-26 11:37:32.181519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.584 [2024-07-26 11:37:32.181534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.584 [2024-07-26 11:37:32.181567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.584 qpair failed and we were unable to recover it. 00:29:36.584 [2024-07-26 11:37:32.191267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.584 [2024-07-26 11:37:32.191425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.584 [2024-07-26 11:37:32.191482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.584 [2024-07-26 11:37:32.191499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.584 [2024-07-26 11:37:32.191513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.584 [2024-07-26 11:37:32.191547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.584 qpair failed and we were unable to recover it. 
00:29:36.584 [2024-07-26 11:37:32.201349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.584 [2024-07-26 11:37:32.201526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.584 [2024-07-26 11:37:32.201555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.584 [2024-07-26 11:37:32.201571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.584 [2024-07-26 11:37:32.201585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.584 [2024-07-26 11:37:32.201618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.584 qpair failed and we were unable to recover it. 00:29:36.584 [2024-07-26 11:37:32.211436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.584 [2024-07-26 11:37:32.211606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.584 [2024-07-26 11:37:32.211635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.584 [2024-07-26 11:37:32.211651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.584 [2024-07-26 11:37:32.211681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.584 [2024-07-26 11:37:32.211722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.584 qpair failed and we were unable to recover it. 00:29:36.584 [2024-07-26 11:37:32.221401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.584 [2024-07-26 11:37:32.221617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.584 [2024-07-26 11:37:32.221646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.584 [2024-07-26 11:37:32.221662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.584 [2024-07-26 11:37:32.221694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.584 [2024-07-26 11:37:32.221733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.584 qpair failed and we were unable to recover it. 
00:29:36.584 [2024-07-26 11:37:32.231451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.584 [2024-07-26 11:37:32.231646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.584 [2024-07-26 11:37:32.231691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.585 [2024-07-26 11:37:32.231710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.585 [2024-07-26 11:37:32.231727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.585 [2024-07-26 11:37:32.231766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.585 qpair failed and we were unable to recover it. 00:29:36.585 [2024-07-26 11:37:32.241406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.585 [2024-07-26 11:37:32.241615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.585 [2024-07-26 11:37:32.241644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.585 [2024-07-26 11:37:32.241660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.585 [2024-07-26 11:37:32.241691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.585 [2024-07-26 11:37:32.241732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.585 qpair failed and we were unable to recover it. 00:29:36.844 [2024-07-26 11:37:32.251501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.844 [2024-07-26 11:37:32.251653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.844 [2024-07-26 11:37:32.251701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.844 [2024-07-26 11:37:32.251720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.844 [2024-07-26 11:37:32.251736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.844 [2024-07-26 11:37:32.251776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.844 qpair failed and we were unable to recover it. 
00:29:36.844 [2024-07-26 11:37:32.261533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.844 [2024-07-26 11:37:32.261668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.844 [2024-07-26 11:37:32.261716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.844 [2024-07-26 11:37:32.261741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.844 [2024-07-26 11:37:32.261760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.844 [2024-07-26 11:37:32.261799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.844 qpair failed and we were unable to recover it. 00:29:36.844 [2024-07-26 11:37:32.271551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.844 [2024-07-26 11:37:32.271691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.844 [2024-07-26 11:37:32.271738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.844 [2024-07-26 11:37:32.271757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.844 [2024-07-26 11:37:32.271773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.844 [2024-07-26 11:37:32.271812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.844 qpair failed and we were unable to recover it. 00:29:36.844 [2024-07-26 11:37:32.281554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.844 [2024-07-26 11:37:32.281703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.844 [2024-07-26 11:37:32.281731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.844 [2024-07-26 11:37:32.281766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.844 [2024-07-26 11:37:32.281782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.844 [2024-07-26 11:37:32.281820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.844 qpair failed and we were unable to recover it. 
00:29:36.844 [2024-07-26 11:37:32.291588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.844 [2024-07-26 11:37:32.291766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.844 [2024-07-26 11:37:32.291798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.845 [2024-07-26 11:37:32.291817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.845 [2024-07-26 11:37:32.291834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.845 [2024-07-26 11:37:32.291872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.845 qpair failed and we were unable to recover it. 00:29:36.845 [2024-07-26 11:37:32.301628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.845 [2024-07-26 11:37:32.301777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.845 [2024-07-26 11:37:32.301810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.845 [2024-07-26 11:37:32.301829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.845 [2024-07-26 11:37:32.301846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.845 [2024-07-26 11:37:32.301883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.845 qpair failed and we were unable to recover it. 00:29:36.845 [2024-07-26 11:37:32.311790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.845 [2024-07-26 11:37:32.311958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.845 [2024-07-26 11:37:32.311992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.845 [2024-07-26 11:37:32.312010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.845 [2024-07-26 11:37:32.312027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.845 [2024-07-26 11:37:32.312067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.845 qpair failed and we were unable to recover it. 
00:29:36.845 [2024-07-26 11:37:32.321689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.845 [2024-07-26 11:37:32.321837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.845 [2024-07-26 11:37:32.321878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.845 [2024-07-26 11:37:32.321897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.845 [2024-07-26 11:37:32.321914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.845 [2024-07-26 11:37:32.321951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.845 qpair failed and we were unable to recover it. 00:29:36.845 [2024-07-26 11:37:32.331694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.845 [2024-07-26 11:37:32.331860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.845 [2024-07-26 11:37:32.331893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.845 [2024-07-26 11:37:32.331911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.845 [2024-07-26 11:37:32.331928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.845 [2024-07-26 11:37:32.331967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.845 qpair failed and we were unable to recover it. 00:29:36.845 [2024-07-26 11:37:32.341785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.845 [2024-07-26 11:37:32.341939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.845 [2024-07-26 11:37:32.341972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.845 [2024-07-26 11:37:32.341991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.845 [2024-07-26 11:37:32.342008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.845 [2024-07-26 11:37:32.342045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.845 qpair failed and we were unable to recover it. 
00:29:36.845 [2024-07-26 11:37:32.351859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.845 [2024-07-26 11:37:32.352019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.845 [2024-07-26 11:37:32.352054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.845 [2024-07-26 11:37:32.352081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.845 [2024-07-26 11:37:32.352099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.845 [2024-07-26 11:37:32.352138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.845 qpair failed and we were unable to recover it. 00:29:36.845 [2024-07-26 11:37:32.361824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.845 [2024-07-26 11:37:32.362031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.845 [2024-07-26 11:37:32.362064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.845 [2024-07-26 11:37:32.362083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.845 [2024-07-26 11:37:32.362099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.845 [2024-07-26 11:37:32.362137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.845 qpair failed and we were unable to recover it. 00:29:36.845 [2024-07-26 11:37:32.371890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.845 [2024-07-26 11:37:32.372047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.845 [2024-07-26 11:37:32.372080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.845 [2024-07-26 11:37:32.372099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.845 [2024-07-26 11:37:32.372115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.845 [2024-07-26 11:37:32.372154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.845 qpair failed and we were unable to recover it. 
00:29:36.845 [2024-07-26 11:37:32.381823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.845 [2024-07-26 11:37:32.381993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.845 [2024-07-26 11:37:32.382027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.845 [2024-07-26 11:37:32.382046] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.845 [2024-07-26 11:37:32.382063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.845 [2024-07-26 11:37:32.382101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.845 qpair failed and we were unable to recover it. 00:29:36.845 [2024-07-26 11:37:32.391870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.845 [2024-07-26 11:37:32.392029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.845 [2024-07-26 11:37:32.392062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.845 [2024-07-26 11:37:32.392081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.845 [2024-07-26 11:37:32.392097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.845 [2024-07-26 11:37:32.392136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.845 qpair failed and we were unable to recover it. 00:29:36.845 [2024-07-26 11:37:32.401929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.845 [2024-07-26 11:37:32.402088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.845 [2024-07-26 11:37:32.402121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.845 [2024-07-26 11:37:32.402139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.845 [2024-07-26 11:37:32.402155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.845 [2024-07-26 11:37:32.402192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.845 qpair failed and we were unable to recover it. 
00:29:36.845 [2024-07-26 11:37:32.411932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.845 [2024-07-26 11:37:32.412085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.845 [2024-07-26 11:37:32.412118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.845 [2024-07-26 11:37:32.412137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.845 [2024-07-26 11:37:32.412154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.845 [2024-07-26 11:37:32.412192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.845 qpair failed and we were unable to recover it. 00:29:36.845 [2024-07-26 11:37:32.421976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.845 [2024-07-26 11:37:32.422189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.845 [2024-07-26 11:37:32.422222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.846 [2024-07-26 11:37:32.422240] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.846 [2024-07-26 11:37:32.422256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.846 [2024-07-26 11:37:32.422293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.846 qpair failed and we were unable to recover it. 00:29:36.846 [2024-07-26 11:37:32.432016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.846 [2024-07-26 11:37:32.432173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.846 [2024-07-26 11:37:32.432206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.846 [2024-07-26 11:37:32.432224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.846 [2024-07-26 11:37:32.432240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.846 [2024-07-26 11:37:32.432279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.846 qpair failed and we were unable to recover it. 
00:29:36.846 [2024-07-26 11:37:32.442081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.846 [2024-07-26 11:37:32.442272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.846 [2024-07-26 11:37:32.442311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.846 [2024-07-26 11:37:32.442332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.846 [2024-07-26 11:37:32.442348] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.846 [2024-07-26 11:37:32.442386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.846 qpair failed and we were unable to recover it. 00:29:36.846 [2024-07-26 11:37:32.452066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.846 [2024-07-26 11:37:32.452224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.846 [2024-07-26 11:37:32.452257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.846 [2024-07-26 11:37:32.452275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.846 [2024-07-26 11:37:32.452291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.846 [2024-07-26 11:37:32.452329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.846 qpair failed and we were unable to recover it. 00:29:36.846 [2024-07-26 11:37:32.462131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.846 [2024-07-26 11:37:32.462281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.846 [2024-07-26 11:37:32.462314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.846 [2024-07-26 11:37:32.462333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.846 [2024-07-26 11:37:32.462349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.846 [2024-07-26 11:37:32.462387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.846 qpair failed and we were unable to recover it. 
00:29:36.846 [2024-07-26 11:37:32.472149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.846 [2024-07-26 11:37:32.472329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.846 [2024-07-26 11:37:32.472367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.846 [2024-07-26 11:37:32.472384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.846 [2024-07-26 11:37:32.472400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.846 [2024-07-26 11:37:32.472450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.846 qpair failed and we were unable to recover it. 00:29:36.846 [2024-07-26 11:37:32.482159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.846 [2024-07-26 11:37:32.482328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.846 [2024-07-26 11:37:32.482362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.846 [2024-07-26 11:37:32.482381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.846 [2024-07-26 11:37:32.482398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.846 [2024-07-26 11:37:32.482469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.846 qpair failed and we were unable to recover it. 00:29:36.846 [2024-07-26 11:37:32.492221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.846 [2024-07-26 11:37:32.492442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.846 [2024-07-26 11:37:32.492488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.846 [2024-07-26 11:37:32.492505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.846 [2024-07-26 11:37:32.492519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.846 [2024-07-26 11:37:32.492553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.846 qpair failed and we were unable to recover it. 
00:29:36.846 [2024-07-26 11:37:32.502350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.846 [2024-07-26 11:37:32.502537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.846 [2024-07-26 11:37:32.502566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.846 [2024-07-26 11:37:32.502581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.846 [2024-07-26 11:37:32.502594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:36.846 [2024-07-26 11:37:32.502629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.846 qpair failed and we were unable to recover it. 00:29:37.105 [2024-07-26 11:37:32.512300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.105 [2024-07-26 11:37:32.512482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.105 [2024-07-26 11:37:32.512510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.105 [2024-07-26 11:37:32.512525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.105 [2024-07-26 11:37:32.512540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:37.105 [2024-07-26 11:37:32.512575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:37.105 qpair failed and we were unable to recover it. 00:29:37.105 [2024-07-26 11:37:32.522280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.105 [2024-07-26 11:37:32.522445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.105 [2024-07-26 11:37:32.522491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.105 [2024-07-26 11:37:32.522507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.105 [2024-07-26 11:37:32.522522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:37.105 [2024-07-26 11:37:32.522555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:37.105 qpair failed and we were unable to recover it. 
00:29:37.105 [2024-07-26 11:37:32.532401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.105 [2024-07-26 11:37:32.532581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.105 [2024-07-26 11:37:32.532614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.105 [2024-07-26 11:37:32.532631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.105 [2024-07-26 11:37:32.532645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:37.105 [2024-07-26 11:37:32.532696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:37.105 qpair failed and we were unable to recover it. 00:29:37.105 [2024-07-26 11:37:32.542334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.105 [2024-07-26 11:37:32.542533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.105 [2024-07-26 11:37:32.542562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.105 [2024-07-26 11:37:32.542578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.105 [2024-07-26 11:37:32.542593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:37.105 [2024-07-26 11:37:32.542625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:37.105 qpair failed and we were unable to recover it. 00:29:37.105 [2024-07-26 11:37:32.552373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.105 [2024-07-26 11:37:32.552553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.105 [2024-07-26 11:37:32.552581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.105 [2024-07-26 11:37:32.552597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.105 [2024-07-26 11:37:32.552611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:37.105 [2024-07-26 11:37:32.552645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:37.105 qpair failed and we were unable to recover it. 
00:29:37.106 [2024-07-26 11:37:32.562422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.106 [2024-07-26 11:37:32.562615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.106 [2024-07-26 11:37:32.562644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.106 [2024-07-26 11:37:32.562660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.106 [2024-07-26 11:37:32.562674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:37.106 [2024-07-26 11:37:32.562722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:37.106 qpair failed and we were unable to recover it. 00:29:37.106 [2024-07-26 11:37:32.572475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.106 [2024-07-26 11:37:32.572618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.106 [2024-07-26 11:37:32.572646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.106 [2024-07-26 11:37:32.572662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.106 [2024-07-26 11:37:32.572701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:37.106 [2024-07-26 11:37:32.572742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:37.106 qpair failed and we were unable to recover it. 00:29:37.106 [2024-07-26 11:37:32.582474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.106 [2024-07-26 11:37:32.582601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.106 [2024-07-26 11:37:32.582637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.106 [2024-07-26 11:37:32.582653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.106 [2024-07-26 11:37:32.582684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:37.106 [2024-07-26 11:37:32.582723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:37.106 qpair failed and we were unable to recover it. 
00:29:37.106 [2024-07-26 11:37:32.592543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.106 [2024-07-26 11:37:32.592685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.106 [2024-07-26 11:37:32.592713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.106 [2024-07-26 11:37:32.592728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.106 [2024-07-26 11:37:32.592743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:37.106 [2024-07-26 11:37:32.592786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:37.106 qpair failed and we were unable to recover it. 00:29:37.106 [2024-07-26 11:37:32.602578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.106 [2024-07-26 11:37:32.602753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.106 [2024-07-26 11:37:32.602787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.106 [2024-07-26 11:37:32.602806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.106 [2024-07-26 11:37:32.602823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:37.106 [2024-07-26 11:37:32.602862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:37.106 qpair failed and we were unable to recover it. 00:29:37.106 [2024-07-26 11:37:32.612644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.106 [2024-07-26 11:37:32.612821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.106 [2024-07-26 11:37:32.612854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.106 [2024-07-26 11:37:32.612872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.106 [2024-07-26 11:37:32.612889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:37.106 [2024-07-26 11:37:32.612928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:37.106 qpair failed and we were unable to recover it. 
00:29:37.106 [2024-07-26 11:37:32.622556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.106 [2024-07-26 11:37:32.622694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.106 [2024-07-26 11:37:32.622722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.106 [2024-07-26 11:37:32.622755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.106 [2024-07-26 11:37:32.622772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:37.106 [2024-07-26 11:37:32.622811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:37.106 qpair failed and we were unable to recover it. 00:29:37.106 [2024-07-26 11:37:32.632719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.106 [2024-07-26 11:37:32.632890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.106 [2024-07-26 11:37:32.632924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.106 [2024-07-26 11:37:32.632942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.106 [2024-07-26 11:37:32.632959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:37.106 [2024-07-26 11:37:32.633000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:37.106 qpair failed and we were unable to recover it. 00:29:37.106 [2024-07-26 11:37:32.642635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.106 [2024-07-26 11:37:32.642799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.106 [2024-07-26 11:37:32.642833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.106 [2024-07-26 11:37:32.642851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.106 [2024-07-26 11:37:32.642868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:37.106 [2024-07-26 11:37:32.642905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:37.106 qpair failed and we were unable to recover it. 
00:29:37.106 [2024-07-26 11:37:32.652721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.106 [2024-07-26 11:37:32.652931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.106 [2024-07-26 11:37:32.652964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.106 [2024-07-26 11:37:32.652982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.106 [2024-07-26 11:37:32.652999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:37.106 [2024-07-26 11:37:32.653038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:37.106 qpair failed and we were unable to recover it. 00:29:37.106 [2024-07-26 11:37:32.662722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.106 [2024-07-26 11:37:32.662896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.106 [2024-07-26 11:37:32.662929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.106 [2024-07-26 11:37:32.662948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.106 [2024-07-26 11:37:32.662971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:37.106 [2024-07-26 11:37:32.663011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:37.106 qpair failed and we were unable to recover it. 00:29:37.106 [2024-07-26 11:37:32.672785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.106 [2024-07-26 11:37:32.673017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.106 [2024-07-26 11:37:32.673051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.106 [2024-07-26 11:37:32.673070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.106 [2024-07-26 11:37:32.673086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cbc000b90 00:29:37.106 [2024-07-26 11:37:32.673125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:37.106 qpair failed and we were unable to recover it. 
00:29:37.106 [2024-07-26 11:37:32.682833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.106 [2024-07-26 11:37:32.683018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.106 [2024-07-26 11:37:32.683059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.106 [2024-07-26 11:37:32.683081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.106 [2024-07-26 11:37:32.683099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0cb4000b90 00:29:37.106 [2024-07-26 11:37:32.683140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.106 qpair failed and we were unable to recover it. 00:29:37.106 [2024-07-26 11:37:32.683239] nvme_ctrlr.c:4480:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:29:37.107 A controller has encountered a failure and is being reset. 00:29:37.107 [2024-07-26 11:37:32.683304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5beb00 (9): Bad file descriptor 00:29:37.365 Controller properly reset. 00:29:37.365 Initializing NVMe Controllers 00:29:37.365 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:37.365 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:37.365 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:37.365 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:37.365 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:37.365 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:37.365 Initialization complete. Launching workers. 
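Here the disconnect sequence bottoms out: the keep-alive submission also fails, the host marks the controller failed and resets it, and the reset succeeds ("Controller properly reset"), after which the controller is re-initialized, re-attached over fabrics at 10.0.0.2:4420, associated with all four lcores, and the worker threads restart on cores 0-3 just below. A minimal sketch of that detect-and-reset step, under the same assumptions as the sketch above (the poll helper is illustrative; spdk_nvme_qpair_process_completions() and spdk_nvme_ctrlr_reset() are the actual driver entry points named in the log):

    /* Hedged sketch: notice a transport-dead qpair while polling and
     * recover by resetting the whole controller, as the recovery path
     * above does. */
    #include <stdio.h>
    #include "spdk/nvme.h"

    static void
    poll_and_recover(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
    {
        /* Returns the number of completions processed, or a negated
         * errno such as the "-6 (No such device or address)" logged
         * above when the TCP connection under the qpair is gone. */
        int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);

        if (rc < 0) {
            /* The qpair cannot be revived in place; reset the whole
             * controller, which disconnects and re-creates its qpairs. */
            if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
                fprintf(stderr, "controller reset failed\n");
            }
        }
    }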
00:29:37.365 Starting thread on core 1 00:29:37.365 Starting thread on core 2 00:29:37.365 Starting thread on core 3 00:29:37.365 Starting thread on core 0 00:29:37.365 11:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:37.365 00:29:37.365 real 0m10.956s 00:29:37.365 user 0m18.698s 00:29:37.365 sys 0m5.591s 00:29:37.365 11:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:37.365 11:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:37.365 ************************************ 00:29:37.365 END TEST nvmf_target_disconnect_tc2 00:29:37.365 ************************************ 00:29:37.365 11:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:37.365 11:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:37.365 11:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:37.365 11:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:37.365 11:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:29:37.365 11:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:37.365 11:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:29:37.365 11:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:37.365 11:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:37.365 rmmod nvme_tcp 00:29:37.365 rmmod nvme_fabrics 00:29:37.365 rmmod nvme_keyring 00:29:37.365 11:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:37.365 11:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:29:37.365 11:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:29:37.365 11:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2236181 ']' 00:29:37.365 11:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2236181 00:29:37.365 11:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 2236181 ']' 00:29:37.365 11:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 2236181 00:29:37.365 11:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:29:37.365 11:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:37.365 11:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2236181 00:29:37.365 11:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:29:37.365 11:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:29:37.365 11:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2236181' 00:29:37.365 killing process with pid 2236181 00:29:37.365 11:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@969 -- # kill 2236181 00:29:37.365 11:37:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 2236181 00:29:37.932 11:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:37.932 11:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:37.932 11:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:37.932 11:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:37.932 11:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:37.932 11:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:37.932 11:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:37.932 11:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.838 11:37:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:39.838 00:29:39.838 real 0m16.320s 00:29:39.838 user 0m45.116s 00:29:39.838 sys 0m7.959s 00:29:39.838 11:37:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:39.838 11:37:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:39.838 ************************************ 00:29:39.838 END TEST nvmf_target_disconnect 00:29:39.838 ************************************ 00:29:39.838 11:37:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:39.838 00:29:39.838 real 5m40.322s 00:29:39.838 user 12m12.079s 00:29:39.838 sys 1m26.261s 00:29:39.838 11:37:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:39.838 11:37:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.838 ************************************ 00:29:39.838 END TEST nvmf_host 00:29:39.838 ************************************ 00:29:39.838 00:29:39.838 real 22m1.507s 00:29:39.838 user 52m6.700s 00:29:39.838 sys 5m38.806s 00:29:39.838 11:37:35 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:39.838 11:37:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:39.838 ************************************ 00:29:39.838 END TEST nvmf_tcp 00:29:39.838 ************************************ 00:29:39.838 11:37:35 -- spdk/autotest.sh@292 -- # [[ 0 -eq 0 ]] 00:29:39.838 11:37:35 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:39.838 11:37:35 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:39.838 11:37:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:39.838 11:37:35 -- common/autotest_common.sh@10 -- # set +x 00:29:39.838 ************************************ 00:29:39.838 START TEST spdkcli_nvmf_tcp 00:29:39.838 ************************************ 00:29:39.838 11:37:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:40.097 * Looking for test storage... 
00:29:40.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2237376 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2237376 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 2237376 ']' 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:40.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:40.097 11:37:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:40.097 [2024-07-26 11:37:35.608298] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:29:40.097 [2024-07-26 11:37:35.608400] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2237376 ] 00:29:40.097 EAL: No free 2048 kB hugepages reported on node 1 00:29:40.097 [2024-07-26 11:37:35.684944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:40.356 [2024-07-26 11:37:35.808537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:40.356 [2024-07-26 11:37:35.808541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:40.356 11:37:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:40.356 11:37:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:29:40.356 11:37:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:40.356 11:37:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:40.356 11:37:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:40.356 11:37:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:40.356 11:37:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:29:40.356 11:37:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:40.356 11:37:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:40.356 11:37:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:40.356 11:37:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:40.356 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:40.356 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:40.356 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:29:40.356 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:40.356 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:40.356 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:40.356 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:40.356 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:40.356 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:40.356 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:40.356 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:40.356 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:40.356 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:40.356 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:40.356 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:40.356 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:40.356 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:40.356 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:40.356 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:40.356 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:40.356 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:40.356 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:40.356 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:29:40.356 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:40.356 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:40.356 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:40.356 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:40.356 ' 00:29:43.645 [2024-07-26 11:37:38.565045] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:44.211 [2024-07-26 11:37:39.805398] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:29:46.737 [2024-07-26 11:37:42.092511] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:29:48.635 [2024-07-26 11:37:44.062849] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:29:50.008 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:50.008 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:50.008 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:50.008 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:50.008 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:50.008 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:50.008 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:50.008 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:50.008 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:50.008 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:50.008 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:50.008 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:50.008 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:50.008 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:50.008 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:50.008 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:29:50.008 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:50.008 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:50.008 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:50.008 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:50.008 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:50.008 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:50.008 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:50.008 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:29:50.008 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:50.008 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:50.008 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:50.008 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:50.266 11:37:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:50.266 11:37:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:50.266 11:37:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:50.266 11:37:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:29:50.266 11:37:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:50.266 11:37:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:50.266 11:37:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:29:50.266 11:37:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:29:50.831 11:37:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:50.831 11:37:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:50.831 11:37:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:50.831 11:37:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:50.831 11:37:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:50.831 11:37:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:50.831 11:37:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:50.831 11:37:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:50.831 11:37:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:50.832 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:50.832 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:50.832 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:50.832 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:29:50.832 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:29:50.832 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:50.832 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:50.832 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:50.832 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:50.832 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:50.832 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:50.832 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:50.832 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:29:50.832 ' 00:29:56.091 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:56.091 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:56.091 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:56.091 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:56.091 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:29:56.091 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:29:56.091 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:29:56.091 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:56.091 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:29:56.091 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:29:56.091 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:29:56.091 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:56.091 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:56.091 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:56.091 11:37:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:56.091 11:37:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:56.091 11:37:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:56.091 11:37:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2237376 00:29:56.091 11:37:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 2237376 ']' 00:29:56.091 11:37:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 2237376 00:29:56.091 11:37:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:29:56.091 11:37:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:56.091 11:37:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2237376 00:29:56.091 11:37:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:56.091 11:37:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:56.091 11:37:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2237376' 00:29:56.091 killing process with pid 2237376 00:29:56.091 11:37:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 2237376 00:29:56.091 11:37:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 2237376 00:29:56.349 11:37:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:29:56.349 11:37:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:29:56.349 11:37:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2237376 ']' 00:29:56.349 11:37:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2237376 00:29:56.349 11:37:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 2237376 ']' 00:29:56.349 11:37:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 2237376 00:29:56.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2237376) - No such process 00:29:56.349 11:37:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 2237376 is not found' 00:29:56.349 Process with pid 2237376 is not found 00:29:56.349 11:37:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:29:56.349 11:37:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:29:56.349 11:37:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:29:56.349 00:29:56.349 real 0m16.450s 00:29:56.349 user 0m34.951s 00:29:56.349 sys 0m0.861s 00:29:56.349 11:37:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:56.349 11:37:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:56.349 ************************************ 00:29:56.349 END TEST spdkcli_nvmf_tcp 00:29:56.349 ************************************ 00:29:56.349 11:37:51 -- spdk/autotest.sh@294 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:56.349 11:37:51 -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:56.349 11:37:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:56.349 11:37:51 -- common/autotest_common.sh@10 -- # set +x 00:29:56.349 ************************************ 00:29:56.349 START TEST nvmf_identify_passthru 00:29:56.349 ************************************ 00:29:56.349 11:37:51 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:56.608 * Looking for test storage... 00:29:56.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:56.608 11:37:52 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:56.608 11:37:52 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:29:56.608 11:37:52 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:56.608 11:37:52 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:56.608 11:37:52 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:56.608 11:37:52 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:56.608 11:37:52 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:56.608 11:37:52 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:56.608 11:37:52 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:56.608 11:37:52 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:56.608 11:37:52 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:56.608 11:37:52 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:56.608 11:37:52 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:56.608 11:37:52 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:29:56.608 11:37:52 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:56.608 11:37:52 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:56.608 11:37:52 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:56.608 11:37:52 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:56.608 11:37:52 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:56.608 11:37:52 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:56.608 11:37:52 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:56.608 11:37:52 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:56.608 11:37:52 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.608 11:37:52 nvmf_identify_passthru -- paths/export.sh@3 -- 
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.609 11:37:52 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.609 11:37:52 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:56.609 11:37:52 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.609 11:37:52 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:29:56.609 11:37:52 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:56.609 11:37:52 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:56.609 11:37:52 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:56.609 11:37:52 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:56.609 11:37:52 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:56.609 11:37:52 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:56.609 11:37:52 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:56.609 11:37:52 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:56.609 11:37:52 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:56.609 11:37:52 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:56.609 11:37:52 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:56.609 11:37:52 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:56.609 11:37:52 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.609 11:37:52 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.609 11:37:52 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.609 11:37:52 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:56.609 11:37:52 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.609 11:37:52 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:29:56.609 11:37:52 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:56.609 11:37:52 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:56.609 11:37:52 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:56.609 11:37:52 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:56.609 11:37:52 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:56.609 11:37:52 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:56.609 11:37:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:56.609 11:37:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:56.609 11:37:52 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:56.609 11:37:52 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:56.609 11:37:52 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:29:56.609 11:37:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
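The trace here is entering nvmftestinit: gather_supported_nvmf_pci_devs, whose array declarations were just traced, keys the e810/x722/mlx tables that follow by PCI vendor/device ID; in this run it finds the two E810 ports 0000:84:00.0/.1 with net devices cvl_0_0 and cvl_0_1. For a phy TCP run, nvmf_tcp_init then moves one port into a private network namespace to play the target while the other stays in the root namespace as the initiator, and verifies both directions with ping. Condensed, the steps traced below come down to roughly this (interface names and the 10.0.0.0/24 addresses are the ones this run uses):
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                     # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator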
00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:29:59.139 Found 0000:84:00.0 (0x8086 - 0x159b) 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:29:59.139 Found 0000:84:00.1 (0x8086 - 0x159b) 00:29:59.139 11:37:54 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:29:59.139 Found net devices under 0000:84:00.0: cvl_0_0 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:59.139 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.140 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:29:59.140 Found net devices under 0000:84:00.1: cvl_0_1 00:29:59.140 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.140 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:59.140 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:29:59.140 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:59.140 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:59.140 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:59.140 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:59.140 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:59.140 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:59.140 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:59.140 11:37:54 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:59.140 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:59.140 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:59.140 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:59.140 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:59.140 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:59.140 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:59.140 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:59.140 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:59.140 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:59.140 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:59.140 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:59.140 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:59.140 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:59.140 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:59.140 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:59.140 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:59.140 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:29:59.140 00:29:59.140 --- 10.0.0.2 ping statistics --- 00:29:59.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.140 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:29:59.140 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:59.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:59.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:29:59.398 00:29:59.398 --- 10.0.0.1 ping statistics --- 00:29:59.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.398 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:29:59.398 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:59.398 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:29:59.398 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:59.398 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:59.398 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:59.398 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:59.398 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:59.398 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:59.398 11:37:54 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:59.398 11:37:54 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:29:59.398 11:37:54 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:59.398 11:37:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:59.398 11:37:54 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:29:59.399 11:37:54 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:29:59.399 11:37:54 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:29:59.399 11:37:54 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:29:59.399 11:37:54 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:29:59.399 11:37:54 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:29:59.399 11:37:54 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:29:59.399 11:37:54 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:59.399 11:37:54 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:59.399 11:37:54 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:29:59.399 11:37:54 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:29:59.399 11:37:54 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:82:00.0 00:29:59.399 11:37:54 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:82:00.0 00:29:59.399 11:37:54 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:82:00.0 00:29:59.399 11:37:54 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:82:00.0 ']' 00:29:59.399 11:37:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:29:59.399 11:37:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:29:59.399 11:37:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:29:59.399 EAL: No free 2048 kB hugepages reported on node 1 00:30:03.638 
11:37:59 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ9142051K1P0FGN 00:30:03.638 11:37:59 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:30:03.638 11:37:59 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:03.638 11:37:59 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:03.638 EAL: No free 2048 kB hugepages reported on node 1 00:30:07.826 11:38:03 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:30:07.826 11:38:03 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:07.826 11:38:03 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:07.826 11:38:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:07.826 11:38:03 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:07.826 11:38:03 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:07.826 11:38:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:07.826 11:38:03 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2242008 00:30:07.826 11:38:03 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:07.826 11:38:03 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:07.826 11:38:03 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2242008 00:30:07.826 11:38:03 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 2242008 ']' 00:30:07.826 11:38:03 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:07.826 11:38:03 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:07.826 11:38:03 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:07.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:07.826 11:38:03 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:07.826 11:38:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:07.826 [2024-07-26 11:38:03.481298] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:30:07.826 [2024-07-26 11:38:03.481407] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:08.087 EAL: No free 2048 kB hugepages reported on node 1 00:30:08.087 [2024-07-26 11:38:03.589321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:08.088 [2024-07-26 11:38:03.715946] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:08.088 [2024-07-26 11:38:03.716014] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
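Note the target above is launched inside the namespace with --wait-for-rpc: initialization blocks until framework_start_init arrives over the RPC socket, which is what lets the test enable the passthru identify handler first. Since rpc_cmd wraps scripts/rpc.py, the JSON-RPC exchange traced below is equivalent to roughly this sequence (arguments are the ones visible in this run):
./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr   # must land before init completes
./scripts/rpc.py framework_start_init
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:82:00.0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
The test then runs spdk_nvme_identify against the NVMe/TCP endpoint and asserts that the serial and model numbers it reports match what the PCIe controller reported directly, which is the passthru behavior under test.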
00:30:08.088 [2024-07-26 11:38:03.716030] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:08.088 [2024-07-26 11:38:03.716044] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:08.088 [2024-07-26 11:38:03.716056] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:08.088 [2024-07-26 11:38:03.716157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:08.088 [2024-07-26 11:38:03.716248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:08.088 [2024-07-26 11:38:03.716302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:08.088 [2024-07-26 11:38:03.716305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:08.346 11:38:03 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:08.346 11:38:03 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:30:08.346 11:38:03 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:08.346 11:38:03 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.346 11:38:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:08.346 INFO: Log level set to 20 00:30:08.346 INFO: Requests: 00:30:08.346 { 00:30:08.346 "jsonrpc": "2.0", 00:30:08.346 "method": "nvmf_set_config", 00:30:08.346 "id": 1, 00:30:08.346 "params": { 00:30:08.346 "admin_cmd_passthru": { 00:30:08.346 "identify_ctrlr": true 00:30:08.346 } 00:30:08.346 } 00:30:08.346 } 00:30:08.346 00:30:08.346 INFO: response: 00:30:08.346 { 00:30:08.346 "jsonrpc": "2.0", 00:30:08.346 "id": 1, 00:30:08.346 "result": true 00:30:08.346 } 00:30:08.346 00:30:08.346 11:38:03 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.346 11:38:03 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:08.346 11:38:03 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.346 11:38:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:08.346 INFO: Setting log level to 20 00:30:08.346 INFO: Setting log level to 20 00:30:08.346 INFO: Log level set to 20 00:30:08.346 INFO: Log level set to 20 00:30:08.346 INFO: Requests: 00:30:08.346 { 00:30:08.346 "jsonrpc": "2.0", 00:30:08.346 "method": "framework_start_init", 00:30:08.346 "id": 1 00:30:08.346 } 00:30:08.346 00:30:08.346 INFO: Requests: 00:30:08.346 { 00:30:08.346 "jsonrpc": "2.0", 00:30:08.346 "method": "framework_start_init", 00:30:08.346 "id": 1 00:30:08.346 } 00:30:08.346 00:30:08.346 [2024-07-26 11:38:03.890788] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:08.346 INFO: response: 00:30:08.346 { 00:30:08.346 "jsonrpc": "2.0", 00:30:08.346 "id": 1, 00:30:08.346 "result": true 00:30:08.346 } 00:30:08.346 00:30:08.346 INFO: response: 00:30:08.346 { 00:30:08.346 "jsonrpc": "2.0", 00:30:08.346 "id": 1, 00:30:08.346 "result": true 00:30:08.346 } 00:30:08.346 00:30:08.346 11:38:03 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.346 11:38:03 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:08.346 11:38:03 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.346 11:38:03 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:30:08.346 INFO: Setting log level to 40 00:30:08.346 INFO: Setting log level to 40 00:30:08.346 INFO: Setting log level to 40 00:30:08.346 [2024-07-26 11:38:03.900929] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:08.346 11:38:03 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.346 11:38:03 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:08.346 11:38:03 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:08.346 11:38:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:08.346 11:38:03 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:82:00.0 00:30:08.346 11:38:03 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.346 11:38:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:11.629 Nvme0n1 00:30:11.629 11:38:06 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.629 11:38:06 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:11.629 11:38:06 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.629 11:38:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:11.629 11:38:06 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.629 11:38:06 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:11.629 11:38:06 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.629 11:38:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:11.629 11:38:06 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.629 11:38:06 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:11.629 11:38:06 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.629 11:38:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:11.629 [2024-07-26 11:38:06.800997] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:11.629 11:38:06 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.629 11:38:06 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:11.629 11:38:06 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.629 11:38:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:11.629 [ 00:30:11.629 { 00:30:11.629 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:11.629 "subtype": "Discovery", 00:30:11.629 "listen_addresses": [], 00:30:11.629 "allow_any_host": true, 00:30:11.629 "hosts": [] 00:30:11.629 }, 00:30:11.629 { 00:30:11.629 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:11.629 "subtype": "NVMe", 00:30:11.629 "listen_addresses": [ 00:30:11.629 { 00:30:11.629 "trtype": "TCP", 00:30:11.629 "adrfam": "IPv4", 00:30:11.629 "traddr": "10.0.0.2", 00:30:11.629 "trsvcid": "4420" 00:30:11.629 } 00:30:11.629 ], 00:30:11.629 "allow_any_host": true, 00:30:11.629 "hosts": [], 00:30:11.629 "serial_number": 
"SPDK00000000000001", 00:30:11.629 "model_number": "SPDK bdev Controller", 00:30:11.629 "max_namespaces": 1, 00:30:11.629 "min_cntlid": 1, 00:30:11.629 "max_cntlid": 65519, 00:30:11.629 "namespaces": [ 00:30:11.629 { 00:30:11.629 "nsid": 1, 00:30:11.630 "bdev_name": "Nvme0n1", 00:30:11.630 "name": "Nvme0n1", 00:30:11.630 "nguid": "CD9270D0A8D74BE2965851814799AB3C", 00:30:11.630 "uuid": "cd9270d0-a8d7-4be2-9658-51814799ab3c" 00:30:11.630 } 00:30:11.630 ] 00:30:11.630 } 00:30:11.630 ] 00:30:11.630 11:38:06 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.630 11:38:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:11.630 11:38:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:11.630 11:38:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:11.630 EAL: No free 2048 kB hugepages reported on node 1 00:30:11.630 11:38:07 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ9142051K1P0FGN 00:30:11.630 11:38:07 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:11.630 11:38:07 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:11.630 11:38:07 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:11.630 EAL: No free 2048 kB hugepages reported on node 1 00:30:11.630 11:38:07 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:30:11.630 11:38:07 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ9142051K1P0FGN '!=' BTLJ9142051K1P0FGN ']' 00:30:11.630 11:38:07 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:30:11.630 11:38:07 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:11.630 11:38:07 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.630 11:38:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:11.630 11:38:07 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.630 11:38:07 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:11.630 11:38:07 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:11.630 11:38:07 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:11.630 11:38:07 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:30:11.630 11:38:07 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:11.630 11:38:07 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:30:11.630 11:38:07 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:11.630 11:38:07 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:11.630 rmmod nvme_tcp 00:30:11.630 rmmod nvme_fabrics 00:30:11.630 rmmod nvme_keyring 00:30:11.630 11:38:07 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:11.630 11:38:07 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:30:11.630 11:38:07 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:30:11.630 11:38:07 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2242008 ']' 00:30:11.630 11:38:07 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2242008 00:30:11.630 11:38:07 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 2242008 ']' 00:30:11.630 11:38:07 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 2242008 00:30:11.630 11:38:07 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:30:11.630 11:38:07 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:11.630 11:38:07 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2242008 00:30:11.630 11:38:07 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:11.630 11:38:07 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:11.630 11:38:07 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2242008' 00:30:11.630 killing process with pid 2242008 00:30:11.630 11:38:07 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 2242008 00:30:11.630 11:38:07 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 2242008 00:30:13.530 11:38:08 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:13.530 11:38:08 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:13.530 11:38:08 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:13.530 11:38:08 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:13.530 11:38:08 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:13.530 11:38:08 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:13.530 11:38:08 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:13.530 11:38:08 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:15.433 11:38:10 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:15.433 00:30:15.433 real 0m18.951s 00:30:15.433 user 0m27.238s 00:30:15.433 sys 0m3.027s 00:30:15.433 11:38:10 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:15.433 11:38:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:15.433 ************************************ 00:30:15.433 END TEST nvmf_identify_passthru 00:30:15.433 ************************************ 00:30:15.433 11:38:10 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:15.433 11:38:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:15.433 11:38:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:15.433 11:38:10 -- common/autotest_common.sh@10 -- # set +x 00:30:15.433 ************************************ 00:30:15.433 START TEST nvmf_dif 00:30:15.433 ************************************ 00:30:15.433 11:38:11 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:15.433 * Looking for test storage... 
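[editor's note] The identify_passthru run above attaches a local PCIe drive, re-exports it over NVMe/TCP as a passthru namespace, and then checks that identify data read back across the fabric reports the physical drive's serial and model rather than SPDK defaults. A minimal sketch of that flow, assuming rpc_cmd resolves to scripts/rpc.py as in SPDK's autotest harness (addresses and NQNs mirror this run):

    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:82:00.0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Identify over the fabric; passthru should surface the physical
    # drive's values (BTLJ9142051K1P0FGN / INTEL ... in this run).
    ./build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | grep 'Serial Number:'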
00:30:15.433 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:15.433 11:38:11 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:15.433 11:38:11 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:30:15.433 11:38:11 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:15.433 11:38:11 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:15.433 11:38:11 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:15.433 11:38:11 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:15.433 11:38:11 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:15.433 11:38:11 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:15.433 11:38:11 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:15.433 11:38:11 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:15.433 11:38:11 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:15.433 11:38:11 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:15.433 11:38:11 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:15.433 11:38:11 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:30:15.433 11:38:11 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:15.433 11:38:11 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:15.433 11:38:11 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:15.433 11:38:11 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:15.433 11:38:11 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:15.691 11:38:11 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:15.691 11:38:11 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:15.691 11:38:11 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:15.691 11:38:11 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.691 11:38:11 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.691 11:38:11 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.691 11:38:11 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:30:15.691 11:38:11 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.691 11:38:11 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:30:15.691 11:38:11 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:15.691 11:38:11 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:15.691 11:38:11 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:15.691 11:38:11 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:15.691 11:38:11 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:15.691 11:38:11 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:15.691 11:38:11 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:15.691 11:38:11 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:15.691 11:38:11 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:30:15.691 11:38:11 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:15.691 11:38:11 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:15.691 11:38:11 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:30:15.691 11:38:11 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:30:15.691 11:38:11 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:15.691 11:38:11 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:15.691 11:38:11 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:15.691 11:38:11 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:15.691 11:38:11 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:15.691 11:38:11 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:15.692 11:38:11 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:15.692 11:38:11 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:15.692 11:38:11 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:15.692 11:38:11 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:15.692 11:38:11 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:30:15.692 11:38:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:18.224 11:38:13 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:18.224 11:38:13 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:30:18.224 11:38:13 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:18.224 11:38:13 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:18.224 11:38:13 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:18.224 11:38:13 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:18.224 11:38:13 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:18.224 11:38:13 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:30:18.224 11:38:13 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:18.224 11:38:13 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:30:18.224 11:38:13 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:30:18.224 11:38:13 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:30:18.224 11:38:13 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:30:18.224 11:38:13 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:30:18.225 Found 0000:84:00.0 (0x8086 - 0x159b) 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:30:18.225 Found 0000:84:00.1 (0x8086 - 0x159b) 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
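[editor's note] The gather_supported_nvmf_pci_devs trace above matches NICs by PCI vendor:device ID (0x8086:0x159b is an Intel E810 port, hence the e810 array) and resolves each match to a kernel interface through sysfs. A hedged illustration of the per-device lookup the helper performs (bus addresses are the ones discovered in this run):

    # Each matched PCI function exposes its net interface under sysfs:
    ls /sys/bus/pci/devices/0000:84:00.0/net/    # -> cvl_0_0
    ls /sys/bus/pci/devices/0000:84:00.1/net/    # -> cvl_0_1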
00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:30:18.225 Found net devices under 0000:84:00.0: cvl_0_0 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:30:18.225 Found net devices under 0000:84:00.1: cvl_0_1 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:18.225 11:38:13 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:18.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:18.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:30:18.225 00:30:18.225 --- 10.0.0.2 ping statistics --- 00:30:18.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.225 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:18.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:18.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:30:18.225 00:30:18.225 --- 10.0.0.1 ping statistics --- 00:30:18.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.225 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:18.225 11:38:13 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:19.601 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:30:19.601 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:30:19.601 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:30:19.601 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:30:19.601 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:30:19.601 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:30:19.601 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:30:19.601 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:30:19.601 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:30:19.601 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:30:19.601 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:30:19.601 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:30:19.601 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:30:19.601 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:30:19.601 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:30:19.601 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:30:19.601 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:30:19.860 11:38:15 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:19.860 11:38:15 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:19.860 11:38:15 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:19.860 11:38:15 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:19.860 11:38:15 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:19.860 11:38:15 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:19.860 11:38:15 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:19.860 11:38:15 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:30:19.860 11:38:15 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:19.860 11:38:15 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:19.860 11:38:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:19.860 11:38:15 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2245297 00:30:19.860 11:38:15 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:19.860 11:38:15 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2245297 00:30:19.860 11:38:15 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 2245297 ']' 00:30:19.860 11:38:15 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:19.860 11:38:15 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:19.860 11:38:15 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:19.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:19.860 11:38:15 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:19.860 11:38:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:20.118 [2024-07-26 11:38:15.532670] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:30:20.118 [2024-07-26 11:38:15.532788] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:20.118 EAL: No free 2048 kB hugepages reported on node 1 00:30:20.118 [2024-07-26 11:38:15.615139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:20.118 [2024-07-26 11:38:15.735403] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:20.118 [2024-07-26 11:38:15.735477] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:20.118 [2024-07-26 11:38:15.735495] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:20.118 [2024-07-26 11:38:15.735508] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:20.118 [2024-07-26 11:38:15.735519] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
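[editor's note] The nvmf_tcp_init sequence above splits the two ports of one physical NIC into an initiator side and a target side so real TCP traffic can flow over the wire on a single host: the target port is moved into its own network namespace and nvmf_tgt is launched inside it. A condensed sketch using the commands from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # The target app runs inside the namespace, so its RPCs get the same wrap:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF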
00:30:20.118 [2024-07-26 11:38:15.735551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:20.375 11:38:15 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:20.375 11:38:15 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:30:20.375 11:38:15 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:20.375 11:38:15 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:20.375 11:38:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:20.375 11:38:15 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:20.375 11:38:15 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:30:20.375 11:38:15 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:20.375 11:38:15 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.375 11:38:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:20.375 [2024-07-26 11:38:15.891743] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:20.375 11:38:15 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.375 11:38:15 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:20.375 11:38:15 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:20.375 11:38:15 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:20.375 11:38:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:20.375 ************************************ 00:30:20.375 START TEST fio_dif_1_default 00:30:20.375 ************************************ 00:30:20.375 11:38:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:30:20.375 11:38:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:30:20.375 11:38:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:30:20.375 11:38:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:30:20.375 11:38:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:30:20.375 11:38:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:30:20.375 11:38:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:20.375 11:38:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.375 11:38:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:20.375 bdev_null0 00:30:20.375 11:38:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.375 11:38:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:20.375 11:38:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.375 11:38:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:20.375 11:38:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.375 11:38:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:20.375 11:38:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.375 11:38:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:20.375 11:38:15 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.375 11:38:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:20.375 11:38:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.375 11:38:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:20.375 [2024-07-26 11:38:15.948025] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:20.375 11:38:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.375 11:38:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:20.375 11:38:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:20.375 11:38:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:20.375 11:38:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:30:20.375 11:38:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:30:20.375 11:38:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:20.375 11:38:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:20.375 11:38:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:20.375 { 00:30:20.375 "params": { 00:30:20.376 "name": "Nvme$subsystem", 00:30:20.376 "trtype": "$TEST_TRANSPORT", 00:30:20.376 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:20.376 "adrfam": "ipv4", 00:30:20.376 "trsvcid": "$NVMF_PORT", 00:30:20.376 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:20.376 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:20.376 "hdgst": ${hdgst:-false}, 00:30:20.376 "ddgst": ${ddgst:-false} 00:30:20.376 }, 00:30:20.376 "method": "bdev_nvme_attach_controller" 00:30:20.376 } 00:30:20.376 EOF 00:30:20.376 )") 00:30:20.376 11:38:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:20.376 11:38:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:30:20.376 11:38:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:20.376 11:38:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:30:20.376 11:38:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:20.376 11:38:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:30:20.376 11:38:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:20.376 11:38:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:20.376 11:38:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:30:20.376 11:38:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:20.376 11:38:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:20.376 11:38:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:30:20.376 11:38:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:20.376 11:38:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:30:20.376 11:38:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:30:20.376 11:38:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:30:20.376 11:38:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:20.376 11:38:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:30:20.376 11:38:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:30:20.376 11:38:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:20.376 "params": { 00:30:20.376 "name": "Nvme0", 00:30:20.376 "trtype": "tcp", 00:30:20.376 "traddr": "10.0.0.2", 00:30:20.376 "adrfam": "ipv4", 00:30:20.376 "trsvcid": "4420", 00:30:20.376 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:20.376 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:20.376 "hdgst": false, 00:30:20.376 "ddgst": false 00:30:20.376 }, 00:30:20.376 "method": "bdev_nvme_attach_controller" 00:30:20.376 }' 00:30:20.376 11:38:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:20.376 11:38:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:20.376 11:38:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:20.376 11:38:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:20.376 11:38:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:20.376 11:38:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:20.376 11:38:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:20.376 11:38:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:20.376 11:38:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:20.376 11:38:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:20.633 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:20.633 fio-3.35 00:30:20.633 Starting 1 thread 00:30:20.633 EAL: No free 2048 kB hugepages reported on node 1 00:30:32.835 00:30:32.835 filename0: (groupid=0, jobs=1): err= 0: pid=2245527: Fri Jul 26 11:38:26 2024 00:30:32.835 read: IOPS=95, BW=383KiB/s (392kB/s)(3840KiB/10024msec) 00:30:32.835 slat (nsec): min=6178, max=61291, avg=10765.06, stdev=4600.48 00:30:32.835 clat (usec): min=40866, max=46702, avg=41732.63, stdev=549.02 00:30:32.835 lat (usec): min=40888, max=46727, avg=41743.39, stdev=549.17 00:30:32.835 clat percentiles (usec): 00:30:32.835 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:30:32.835 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:30:32.835 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:32.835 | 99.00th=[42206], 99.50th=[42730], 99.90th=[46924], 99.95th=[46924], 00:30:32.835 | 99.99th=[46924] 00:30:32.835 bw ( KiB/s): min= 352, max= 384, per=99.72%, avg=382.40, stdev= 7.16, samples=20 00:30:32.835 iops : min= 88, max= 96, 
avg=95.60, stdev= 1.79, samples=20 00:30:32.835 lat (msec) : 50=100.00% 00:30:32.835 cpu : usr=89.87%, sys=9.84%, ctx=16, majf=0, minf=265 00:30:32.835 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:32.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.835 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.835 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.835 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:32.835 00:30:32.835 Run status group 0 (all jobs): 00:30:32.835 READ: bw=383KiB/s (392kB/s), 383KiB/s-383KiB/s (392kB/s-392kB/s), io=3840KiB (3932kB), run=10024-10024msec 00:30:32.835 11:38:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:32.835 11:38:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:30:32.835 11:38:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:30:32.835 11:38:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:32.835 11:38:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:30:32.835 11:38:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:32.835 11:38:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.835 11:38:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:32.835 11:38:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.835 11:38:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:32.835 11:38:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.835 11:38:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:32.835 11:38:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.835 00:30:32.835 real 0m11.320s 00:30:32.835 user 0m10.235s 00:30:32.835 sys 0m1.311s 00:30:32.835 11:38:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:32.835 11:38:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:32.835 ************************************ 00:30:32.835 END TEST fio_dif_1_default 00:30:32.835 ************************************ 00:30:32.835 11:38:27 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:32.835 11:38:27 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:32.835 11:38:27 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:32.835 11:38:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:32.835 ************************************ 00:30:32.835 START TEST fio_dif_1_multi_subsystems 00:30:32.835 ************************************ 00:30:32.835 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:30:32.835 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:30:32.835 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:32.835 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:30:32.835 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:32.835 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # 
create_subsystem 0 00:30:32.835 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:30:32.835 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:32.835 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:32.836 bdev_null0 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:32.836 [2024-07-26 11:38:27.312170] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:32.836 bdev_null1 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.836 11:38:27 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:32.836 { 00:30:32.836 "params": { 00:30:32.836 "name": "Nvme$subsystem", 00:30:32.836 "trtype": "$TEST_TRANSPORT", 00:30:32.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:32.836 "adrfam": "ipv4", 00:30:32.836 "trsvcid": "$NVMF_PORT", 00:30:32.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:32.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:32.836 "hdgst": ${hdgst:-false}, 00:30:32.836 "ddgst": ${ddgst:-false} 00:30:32.836 }, 00:30:32.836 "method": "bdev_nvme_attach_controller" 00:30:32.836 } 00:30:32.836 EOF 00:30:32.836 )") 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1341 -- # shift 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:32.836 { 00:30:32.836 "params": { 00:30:32.836 "name": "Nvme$subsystem", 00:30:32.836 "trtype": "$TEST_TRANSPORT", 00:30:32.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:32.836 "adrfam": "ipv4", 00:30:32.836 "trsvcid": "$NVMF_PORT", 00:30:32.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:32.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:32.836 "hdgst": ${hdgst:-false}, 00:30:32.836 "ddgst": ${ddgst:-false} 00:30:32.836 }, 00:30:32.836 "method": "bdev_nvme_attach_controller" 00:30:32.836 } 00:30:32.836 EOF 00:30:32.836 )") 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
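[editor's note] fio never touches the kernel NVMe stack in these dif tests: the job runs through SPDK's fio bdev plugin, and the target connection is described by the JSON that gen_nvmf_target_json assembles (one bdev_nvme_attach_controller entry per subsystem, printed just below). A condensed sketch of the equivalent manual invocation; bdev.json and job.fio are illustrative file names, as the harness feeds both through /dev/fd pipes instead:

    LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=bdev.json job.fio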
00:30:32.836 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:30:32.837 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:32.837 "params": { 00:30:32.837 "name": "Nvme0", 00:30:32.837 "trtype": "tcp", 00:30:32.837 "traddr": "10.0.0.2", 00:30:32.837 "adrfam": "ipv4", 00:30:32.837 "trsvcid": "4420", 00:30:32.837 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:32.837 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:32.837 "hdgst": false, 00:30:32.837 "ddgst": false 00:30:32.837 }, 00:30:32.837 "method": "bdev_nvme_attach_controller" 00:30:32.837 },{ 00:30:32.837 "params": { 00:30:32.837 "name": "Nvme1", 00:30:32.837 "trtype": "tcp", 00:30:32.837 "traddr": "10.0.0.2", 00:30:32.837 "adrfam": "ipv4", 00:30:32.837 "trsvcid": "4420", 00:30:32.837 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:32.837 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:32.837 "hdgst": false, 00:30:32.837 "ddgst": false 00:30:32.837 }, 00:30:32.837 "method": "bdev_nvme_attach_controller" 00:30:32.837 }' 00:30:32.837 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:32.837 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:32.837 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:32.837 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:32.837 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:32.837 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:32.837 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:32.837 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:32.837 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:32.837 11:38:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:32.837 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:32.837 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:32.837 fio-3.35 00:30:32.837 Starting 2 threads 00:30:32.837 EAL: No free 2048 kB hugepages reported on node 1 00:30:42.802 00:30:42.802 filename0: (groupid=0, jobs=1): err= 0: pid=2246933: Fri Jul 26 11:38:38 2024 00:30:42.802 read: IOPS=187, BW=749KiB/s (767kB/s)(7488KiB/10001msec) 00:30:42.802 slat (nsec): min=6416, max=73513, avg=10276.98, stdev=4149.21 00:30:42.802 clat (usec): min=716, max=45326, avg=21335.39, stdev=20538.36 00:30:42.802 lat (usec): min=725, max=45372, avg=21345.67, stdev=20538.36 00:30:42.802 clat percentiles (usec): 00:30:42.802 | 1.00th=[ 742], 5.00th=[ 758], 10.00th=[ 766], 20.00th=[ 783], 00:30:42.802 | 30.00th=[ 799], 40.00th=[ 832], 50.00th=[ 1188], 60.00th=[41157], 00:30:42.802 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:42.802 | 99.00th=[42206], 99.50th=[42730], 99.90th=[45351], 99.95th=[45351], 00:30:42.802 | 99.99th=[45351] 
00:30:42.802 bw ( KiB/s): min= 704, max= 768, per=57.32%, avg=747.79, stdev=26.58, samples=19 00:30:42.802 iops : min= 176, max= 192, avg=186.95, stdev= 6.65, samples=19 00:30:42.802 lat (usec) : 750=2.08%, 1000=47.49% 00:30:42.802 lat (msec) : 2=0.43%, 50=50.00% 00:30:42.802 cpu : usr=94.42%, sys=5.27%, ctx=17, majf=0, minf=99 00:30:42.802 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:42.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:42.802 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:42.802 issued rwts: total=1872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:42.802 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:42.802 filename1: (groupid=0, jobs=1): err= 0: pid=2246934: Fri Jul 26 11:38:38 2024 00:30:42.802 read: IOPS=138, BW=555KiB/s (568kB/s)(5552KiB/10006msec) 00:30:42.802 slat (nsec): min=5165, max=61286, avg=10891.12, stdev=4727.45 00:30:42.802 clat (usec): min=730, max=45226, avg=28800.09, stdev=19035.87 00:30:42.802 lat (usec): min=738, max=45256, avg=28810.98, stdev=19035.88 00:30:42.802 clat percentiles (usec): 00:30:42.802 | 1.00th=[ 750], 5.00th=[ 824], 10.00th=[ 840], 20.00th=[ 865], 00:30:42.802 | 30.00th=[ 1090], 40.00th=[41157], 50.00th=[41157], 60.00th=[42206], 00:30:42.802 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:42.802 | 99.00th=[42730], 99.50th=[42730], 99.90th=[45351], 99.95th=[45351], 00:30:42.802 | 99.99th=[45351] 00:30:42.802 bw ( KiB/s): min= 384, max= 768, per=42.43%, avg=553.60, stdev=175.92, samples=20 00:30:42.802 iops : min= 96, max= 192, avg=138.40, stdev=43.98, samples=20 00:30:42.802 lat (usec) : 750=0.72%, 1000=26.73% 00:30:42.802 lat (msec) : 2=4.25%, 50=68.30% 00:30:42.802 cpu : usr=93.88%, sys=5.66%, ctx=23, majf=0, minf=193 00:30:42.802 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:42.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:42.802 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:42.802 issued rwts: total=1388,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:42.802 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:42.802 00:30:42.802 Run status group 0 (all jobs): 00:30:42.802 READ: bw=1303KiB/s (1334kB/s), 555KiB/s-749KiB/s (568kB/s-767kB/s), io=12.7MiB (13.4MB), run=10001-10006msec 00:30:43.061 11:38:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:30:43.061 11:38:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:30:43.061 11:38:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:43.061 11:38:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:43.061 11:38:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:30:43.061 11:38:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:43.061 11:38:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.061 11:38:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:43.061 11:38:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.061 11:38:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:43.061 11:38:38 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.061 11:38:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:43.061 11:38:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.061 11:38:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:43.061 11:38:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:43.061 11:38:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:30:43.061 11:38:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:43.061 11:38:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.061 11:38:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:43.061 11:38:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.061 11:38:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:43.061 11:38:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.061 11:38:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:43.061 11:38:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.061 00:30:43.061 real 0m11.341s 00:30:43.061 user 0m20.281s 00:30:43.061 sys 0m1.390s 00:30:43.061 11:38:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:43.061 11:38:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:43.061 ************************************ 00:30:43.061 END TEST fio_dif_1_multi_subsystems 00:30:43.061 ************************************ 00:30:43.061 11:38:38 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:30:43.061 11:38:38 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:43.061 11:38:38 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:43.061 11:38:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:43.061 ************************************ 00:30:43.061 START TEST fio_dif_rand_params 00:30:43.061 ************************************ 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 
-- # create_subsystem 0 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:43.061 bdev_null0 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:43.061 [2024-07-26 11:38:38.699012] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:43.061 { 00:30:43.061 "params": { 00:30:43.061 "name": "Nvme$subsystem", 00:30:43.061 "trtype": "$TEST_TRANSPORT", 00:30:43.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:43.061 "adrfam": "ipv4", 00:30:43.061 "trsvcid": "$NVMF_PORT", 00:30:43.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:43.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:43.061 "hdgst": ${hdgst:-false}, 00:30:43.061 "ddgst": ${ddgst:-false} 00:30:43.061 }, 00:30:43.061 "method": "bdev_nvme_attach_controller" 00:30:43.061 } 00:30:43.061 EOF 00:30:43.061 )") 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:43.061 11:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:43.062 11:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:43.062 11:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:43.062 11:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:43.062 11:38:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:43.062 11:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:43.062 11:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:43.062 11:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:43.062 11:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:43.062 11:38:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:43.062 11:38:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
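[Note] The xtrace above shows how the harness assembles the JSON that the spdk_bdev fio plugin consumes: gen_nvmf_target_json expands a heredoc template once per subsystem id, comma-joins the resulting objects via IFS, and runs the document through jq to validate and normalize it before handing it to fio on /dev/fd/62. A minimal bash sketch of that pattern (a reconstruction, not the SPDK helper itself; the tcp/10.0.0.2/4420 defaults mirror this run's environment):

gen_target_json() {
    local subsystem config=()
    for subsystem in "${@:-0}"; do
        # One attach_controller RPC object per subsystem id, filled in from
        # the test environment (or the defaults seen in this log).
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Comma-join the per-subsystem objects and wrap them in the bdev section
    # that the fio plugin loads via --spdk_json_conf; jq validates the result.
    local IFS=,
    jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
JSON
}

# e.g.: gen_target_json 0 1 2 > /tmp/bdev.json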
00:30:43.062 11:38:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:43.062 11:38:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:43.062 "params": { 00:30:43.062 "name": "Nvme0", 00:30:43.062 "trtype": "tcp", 00:30:43.062 "traddr": "10.0.0.2", 00:30:43.062 "adrfam": "ipv4", 00:30:43.062 "trsvcid": "4420", 00:30:43.062 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:43.062 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:43.062 "hdgst": false, 00:30:43.062 "ddgst": false 00:30:43.062 }, 00:30:43.062 "method": "bdev_nvme_attach_controller" 00:30:43.062 }' 00:30:43.320 11:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:43.320 11:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:43.320 11:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:43.320 11:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:43.320 11:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:43.320 11:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:43.320 11:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:43.320 11:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:43.320 11:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:43.320 11:38:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:43.320 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:43.320 ... 
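[Note] The "filename0" line above is fio echoing the effective job options (randread, 128 KiB blocks, spdk_bdev ioengine, iodepth 3), and "Starting 3 threads" below reflects numjobs=3 in thread mode. A plausible reconstruction of the job file gen_fio_conf writes to /dev/fd/61 follows; bs/iodepth/numjobs/runtime come from the target/dif.sh@103 assignments in the log, while time_based and the bdev name Nvme0n1 (controller "Nvme0" plus namespace 1) are assumptions:

[global]
thread=1
ioengine=spdk_bdev
bs=128k
rw=randread
iodepth=3
numjobs=3
time_based=1
runtime=5

[filename0]
filename=Nvme0n1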
00:30:43.320 fio-3.35 00:30:43.320 Starting 3 threads 00:30:43.589 EAL: No free 2048 kB hugepages reported on node 1 00:30:50.152 00:30:50.152 filename0: (groupid=0, jobs=1): err= 0: pid=2248330: Fri Jul 26 11:38:44 2024 00:30:50.152 read: IOPS=173, BW=21.7MiB/s (22.7MB/s)(109MiB/5048msec) 00:30:50.152 slat (nsec): min=4978, max=34989, avg=14289.64, stdev=2021.99 00:30:50.152 clat (usec): min=5197, max=92467, avg=17241.71, stdev=15651.76 00:30:50.152 lat (usec): min=5211, max=92481, avg=17256.00, stdev=15651.56 00:30:50.152 clat percentiles (usec): 00:30:50.152 | 1.00th=[ 5735], 5.00th=[ 6128], 10.00th=[ 6390], 20.00th=[ 8160], 00:30:50.152 | 30.00th=[ 9634], 40.00th=[10159], 50.00th=[11207], 60.00th=[12649], 00:30:50.152 | 70.00th=[14222], 80.00th=[15664], 90.00th=[51119], 95.00th=[54264], 00:30:50.152 | 99.00th=[56886], 99.50th=[56886], 99.90th=[92799], 99.95th=[92799], 00:30:50.152 | 99.99th=[92799] 00:30:50.152 bw ( KiB/s): min=16896, max=39168, per=31.98%, avg=22323.80, stdev=6417.10, samples=10 00:30:50.152 iops : min= 132, max= 306, avg=174.30, stdev=50.20, samples=10 00:30:50.152 lat (msec) : 10=36.23%, 20=48.23%, 50=2.51%, 100=13.03% 00:30:50.152 cpu : usr=92.63%, sys=6.82%, ctx=8, majf=0, minf=118 00:30:50.152 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:50.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.152 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.152 issued rwts: total=875,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:50.152 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:50.152 filename0: (groupid=0, jobs=1): err= 0: pid=2248331: Fri Jul 26 11:38:44 2024 00:30:50.152 read: IOPS=175, BW=21.9MiB/s (23.0MB/s)(110MiB/5008msec) 00:30:50.152 slat (nsec): min=4896, max=31413, avg=14185.54, stdev=2094.10 00:30:50.152 clat (usec): min=5573, max=56458, avg=17067.82, stdev=15558.51 00:30:50.152 lat (usec): min=5587, max=56477, avg=17082.00, stdev=15558.70 00:30:50.152 clat percentiles (usec): 00:30:50.152 | 1.00th=[ 5997], 5.00th=[ 6325], 10.00th=[ 6652], 20.00th=[ 8455], 00:30:50.152 | 30.00th=[ 9241], 40.00th=[ 9765], 50.00th=[10814], 60.00th=[12256], 00:30:50.152 | 70.00th=[13304], 80.00th=[14484], 90.00th=[50594], 95.00th=[53216], 00:30:50.152 | 99.00th=[55313], 99.50th=[55837], 99.90th=[56361], 99.95th=[56361], 00:30:50.152 | 99.99th=[56361] 00:30:50.152 bw ( KiB/s): min=14592, max=28160, per=32.11%, avg=22416.20, stdev=3666.42, samples=10 00:30:50.152 iops : min= 114, max= 220, avg=175.00, stdev=28.62, samples=10 00:30:50.152 lat (msec) : 10=44.71%, 20=38.91%, 50=3.64%, 100=12.74% 00:30:50.152 cpu : usr=92.87%, sys=6.63%, ctx=6, majf=0, minf=82 00:30:50.152 IO depths : 1=1.3%, 2=98.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:50.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.152 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.152 issued rwts: total=879,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:50.153 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:50.153 filename0: (groupid=0, jobs=1): err= 0: pid=2248332: Fri Jul 26 11:38:44 2024 00:30:50.153 read: IOPS=199, BW=24.9MiB/s (26.1MB/s)(125MiB/5008msec) 00:30:50.153 slat (nsec): min=4899, max=34168, avg=14307.91, stdev=2283.97 00:30:50.153 clat (usec): min=5106, max=91993, avg=15015.95, stdev=13447.51 00:30:50.153 lat (usec): min=5120, max=92009, avg=15030.26, stdev=13447.54 00:30:50.153 clat percentiles (usec): 
00:30:50.153 | 1.00th=[ 5932], 5.00th=[ 6259], 10.00th=[ 6390], 20.00th=[ 7767], 00:30:50.153 | 30.00th=[ 9241], 40.00th=[ 9765], 50.00th=[10552], 60.00th=[11338], 00:30:50.153 | 70.00th=[12911], 80.00th=[14615], 90.00th=[47973], 95.00th=[51119], 00:30:50.153 | 99.00th=[54789], 99.50th=[55837], 99.90th=[91751], 99.95th=[91751], 00:30:50.153 | 99.99th=[91751] 00:30:50.153 bw ( KiB/s): min=16929, max=36096, per=36.52%, avg=25491.80, stdev=5836.04, samples=10 00:30:50.153 iops : min= 132, max= 282, avg=199.00, stdev=45.70, samples=10 00:30:50.153 lat (msec) : 10=44.14%, 20=44.54%, 50=4.30%, 100=7.01% 00:30:50.153 cpu : usr=92.27%, sys=7.27%, ctx=9, majf=0, minf=165 00:30:50.153 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:50.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.153 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.153 issued rwts: total=999,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:50.153 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:50.153 00:30:50.153 Run status group 0 (all jobs): 00:30:50.153 READ: bw=68.2MiB/s (71.5MB/s), 21.7MiB/s-24.9MiB/s (22.7MB/s-26.1MB/s), io=344MiB (361MB), run=5008-5048msec 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
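[Note] The create_subsystem helper exercised above (and rerun below with --dif-type 2) boils down to four RPCs, all visible verbatim in the xtrace. Outside the harness the same setup can be driven with scripts/rpc.py against a running nvmf_tgt; rpc_cmd in the log is a thin wrapper around that script. A sketch, assuming the TCP transport has already been created (./scripts/rpc.py nvmf_create_transport -t tcp):

# 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 2
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420

Teardown mirrors it in reverse, as destroy_subsystem does in the log: nvmf_delete_subsystem followed by bdev_null_delete.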
00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:50.153 bdev_null0 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:50.153 [2024-07-26 11:38:44.855540] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:50.153 bdev_null1 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:50.153 bdev_null2 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:30:50.153 { 00:30:50.153 "params": { 00:30:50.153 "name": "Nvme$subsystem", 00:30:50.153 "trtype": "$TEST_TRANSPORT", 00:30:50.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:50.153 "adrfam": "ipv4", 00:30:50.153 "trsvcid": "$NVMF_PORT", 00:30:50.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:50.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:50.153 "hdgst": ${hdgst:-false}, 00:30:50.153 "ddgst": ${ddgst:-false} 00:30:50.153 }, 00:30:50.153 "method": "bdev_nvme_attach_controller" 00:30:50.153 } 00:30:50.153 EOF 00:30:50.153 )") 00:30:50.153 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:50.154 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:50.154 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:50.154 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:50.154 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:50.154 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:50.154 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:50.154 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:50.154 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:50.154 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:50.154 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:50.154 11:38:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:50.154 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:50.154 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:50.154 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:50.154 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:50.154 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:50.154 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:50.154 11:38:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:50.154 11:38:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:50.154 { 00:30:50.154 "params": { 00:30:50.154 "name": "Nvme$subsystem", 00:30:50.154 "trtype": "$TEST_TRANSPORT", 00:30:50.154 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:50.154 "adrfam": "ipv4", 00:30:50.154 "trsvcid": "$NVMF_PORT", 00:30:50.154 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:50.154 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:50.154 "hdgst": ${hdgst:-false}, 00:30:50.154 "ddgst": ${ddgst:-false} 00:30:50.154 }, 00:30:50.154 "method": "bdev_nvme_attach_controller" 00:30:50.154 } 00:30:50.154 EOF 00:30:50.154 )") 00:30:50.154 11:38:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:50.154 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file++ )) 00:30:50.154 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:50.154 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:50.154 11:38:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:50.154 11:38:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:50.154 { 00:30:50.154 "params": { 00:30:50.154 "name": "Nvme$subsystem", 00:30:50.154 "trtype": "$TEST_TRANSPORT", 00:30:50.154 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:50.154 "adrfam": "ipv4", 00:30:50.154 "trsvcid": "$NVMF_PORT", 00:30:50.154 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:50.154 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:50.154 "hdgst": ${hdgst:-false}, 00:30:50.154 "ddgst": ${ddgst:-false} 00:30:50.154 }, 00:30:50.154 "method": "bdev_nvme_attach_controller" 00:30:50.154 } 00:30:50.154 EOF 00:30:50.154 )") 00:30:50.154 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:50.154 11:38:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:50.154 11:38:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:50.154 11:38:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:30:50.154 11:38:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:50.154 11:38:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:50.154 "params": { 00:30:50.154 "name": "Nvme0", 00:30:50.154 "trtype": "tcp", 00:30:50.154 "traddr": "10.0.0.2", 00:30:50.154 "adrfam": "ipv4", 00:30:50.154 "trsvcid": "4420", 00:30:50.154 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:50.154 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:50.154 "hdgst": false, 00:30:50.154 "ddgst": false 00:30:50.154 }, 00:30:50.154 "method": "bdev_nvme_attach_controller" 00:30:50.154 },{ 00:30:50.154 "params": { 00:30:50.154 "name": "Nvme1", 00:30:50.154 "trtype": "tcp", 00:30:50.154 "traddr": "10.0.0.2", 00:30:50.154 "adrfam": "ipv4", 00:30:50.154 "trsvcid": "4420", 00:30:50.154 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:50.154 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:50.154 "hdgst": false, 00:30:50.154 "ddgst": false 00:30:50.154 }, 00:30:50.154 "method": "bdev_nvme_attach_controller" 00:30:50.154 },{ 00:30:50.154 "params": { 00:30:50.154 "name": "Nvme2", 00:30:50.154 "trtype": "tcp", 00:30:50.154 "traddr": "10.0.0.2", 00:30:50.154 "adrfam": "ipv4", 00:30:50.154 "trsvcid": "4420", 00:30:50.154 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:50.154 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:50.154 "hdgst": false, 00:30:50.154 "ddgst": false 00:30:50.154 }, 00:30:50.154 "method": "bdev_nvme_attach_controller" 00:30:50.154 }' 00:30:50.154 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:50.154 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:50.154 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:50.154 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:50.154 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:50.154 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:50.154 11:38:44 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:30:50.154 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:50.154 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:50.154 11:38:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:50.154 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:50.154 ... 00:30:50.154 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:50.154 ... 00:30:50.154 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:50.154 ... 00:30:50.154 fio-3.35 00:30:50.154 Starting 24 threads 00:30:50.154 EAL: No free 2048 kB hugepages reported on node 1 00:31:02.362 00:31:02.362 filename0: (groupid=0, jobs=1): err= 0: pid=2249188: Fri Jul 26 11:38:56 2024 00:31:02.362 read: IOPS=394, BW=1579KiB/s (1617kB/s)(15.4MiB/10009msec) 00:31:02.362 slat (usec): min=8, max=107, avg=19.72, stdev= 8.53 00:31:02.362 clat (msec): min=28, max=184, avg=40.35, stdev=20.84 00:31:02.362 lat (msec): min=28, max=184, avg=40.37, stdev=20.84 00:31:02.362 clat percentiles (msec): 00:31:02.362 | 1.00th=[ 36], 5.00th=[ 37], 10.00th=[ 37], 20.00th=[ 37], 00:31:02.362 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:31:02.362 | 70.00th=[ 37], 80.00th=[ 37], 90.00th=[ 37], 95.00th=[ 38], 00:31:02.362 | 99.00th=[ 169], 99.50th=[ 178], 99.90th=[ 186], 99.95th=[ 186], 00:31:02.362 | 99.99th=[ 186] 00:31:02.362 bw ( KiB/s): min= 512, max= 1792, per=4.23%, avg=1574.40, stdev=427.77, samples=20 00:31:02.362 iops : min= 128, max= 448, avg=393.60, stdev=106.94, samples=20 00:31:02.362 lat (msec) : 50=95.95%, 100=1.21%, 250=2.83% 00:31:02.362 cpu : usr=91.38%, sys=4.54%, ctx=254, majf=0, minf=94 00:31:02.362 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:02.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.362 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.362 issued rwts: total=3952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:02.362 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:02.362 filename0: (groupid=0, jobs=1): err= 0: pid=2249189: Fri Jul 26 11:38:56 2024 00:31:02.362 read: IOPS=388, BW=1554KiB/s (1591kB/s)(15.2MiB/10008msec) 00:31:02.362 slat (usec): min=8, max=121, avg=38.81, stdev=30.64 00:31:02.362 clat (msec): min=34, max=237, avg=40.84, stdev=28.45 00:31:02.362 lat (msec): min=34, max=237, avg=40.88, stdev=28.46 00:31:02.362 clat percentiles (msec): 00:31:02.362 | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:31:02.362 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:31:02.362 | 70.00th=[ 37], 80.00th=[ 37], 90.00th=[ 37], 95.00th=[ 37], 00:31:02.362 | 99.00th=[ 228], 99.50th=[ 236], 99.90th=[ 239], 99.95th=[ 239], 00:31:02.362 | 99.99th=[ 239] 00:31:02.362 bw ( KiB/s): min= 256, max= 1792, per=4.14%, avg=1542.74, stdev=496.61, samples=19 00:31:02.362 iops : min= 64, max= 448, avg=385.68, stdev=124.15, samples=19 00:31:02.362 lat (msec) : 50=97.53%, 250=2.47% 00:31:02.362 cpu : usr=97.37%, sys=1.86%, ctx=43, majf=0, minf=39 00:31:02.362 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 
8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:02.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.362 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.362 issued rwts: total=3888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:02.362 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:02.362 filename0: (groupid=0, jobs=1): err= 0: pid=2249190: Fri Jul 26 11:38:56 2024 00:31:02.362 read: IOPS=389, BW=1559KiB/s (1597kB/s)(15.2MiB/10015msec) 00:31:02.362 slat (nsec): min=8152, max=94643, avg=23318.45, stdev=8616.69 00:31:02.362 clat (msec): min=19, max=275, avg=40.82, stdev=26.86 00:31:02.362 lat (msec): min=19, max=275, avg=40.84, stdev=26.86 00:31:02.362 clat percentiles (msec): 00:31:02.362 | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 37], 00:31:02.362 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:31:02.362 | 70.00th=[ 37], 80.00th=[ 37], 90.00th=[ 37], 95.00th=[ 37], 00:31:02.362 | 99.00th=[ 228], 99.50th=[ 236], 99.90th=[ 247], 99.95th=[ 275], 00:31:02.362 | 99.99th=[ 275] 00:31:02.362 bw ( KiB/s): min= 256, max= 1795, per=4.18%, avg=1555.95, stdev=480.44, samples=20 00:31:02.362 iops : min= 64, max= 448, avg=388.80, stdev=120.03, samples=20 00:31:02.362 lat (msec) : 20=0.05%, 50=97.08%, 250=2.82%, 500=0.05% 00:31:02.362 cpu : usr=95.18%, sys=2.80%, ctx=116, majf=0, minf=59 00:31:02.362 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:02.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.362 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.362 issued rwts: total=3904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:02.362 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:02.362 filename0: (groupid=0, jobs=1): err= 0: pid=2249191: Fri Jul 26 11:38:56 2024 00:31:02.362 read: IOPS=386, BW=1546KiB/s (1583kB/s)(15.2MiB/10048msec) 00:31:02.362 slat (usec): min=9, max=133, avg=56.86, stdev=27.93 00:31:02.362 clat (msec): min=15, max=419, avg=40.91, stdev=33.93 00:31:02.362 lat (msec): min=15, max=419, avg=40.96, stdev=33.93 00:31:02.362 clat percentiles (msec): 00:31:02.362 | 1.00th=[ 28], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:31:02.362 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:31:02.362 | 70.00th=[ 37], 80.00th=[ 37], 90.00th=[ 37], 95.00th=[ 37], 00:31:02.362 | 99.00th=[ 236], 99.50th=[ 239], 99.90th=[ 422], 99.95th=[ 422], 00:31:02.362 | 99.99th=[ 422] 00:31:02.362 bw ( KiB/s): min= 128, max= 1792, per=4.12%, avg=1534.32, stdev=508.76, samples=19 00:31:02.362 iops : min= 32, max= 448, avg=383.58, stdev=127.19, samples=19 00:31:02.362 lat (msec) : 20=0.10%, 50=97.48%, 100=0.36%, 250=1.60%, 500=0.46% 00:31:02.362 cpu : usr=98.29%, sys=1.26%, ctx=46, majf=0, minf=49 00:31:02.362 IO depths : 1=0.1%, 2=4.3%, 4=17.1%, 8=64.2%, 16=14.3%, 32=0.0%, >=64=0.0% 00:31:02.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.362 complete : 0=0.0%, 4=92.8%, 8=3.4%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.362 issued rwts: total=3884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:02.362 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:02.362 filename0: (groupid=0, jobs=1): err= 0: pid=2249192: Fri Jul 26 11:38:56 2024 00:31:02.363 read: IOPS=388, BW=1554KiB/s (1591kB/s)(15.2MiB/10008msec) 00:31:02.363 slat (usec): min=10, max=134, avg=76.95, stdev=23.26 00:31:02.363 clat (msec): min=26, max=273, avg=40.50, stdev=28.75 
00:31:02.363 lat (msec): min=26, max=273, avg=40.58, stdev=28.75 00:31:02.363 clat percentiles (msec): 00:31:02.363 | 1.00th=[ 35], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:31:02.363 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 36], 00:31:02.363 | 70.00th=[ 37], 80.00th=[ 37], 90.00th=[ 37], 95.00th=[ 37], 00:31:02.363 | 99.00th=[ 236], 99.50th=[ 236], 99.90th=[ 239], 99.95th=[ 275], 00:31:02.363 | 99.99th=[ 275] 00:31:02.363 bw ( KiB/s): min= 240, max= 1795, per=4.16%, avg=1549.55, stdev=496.85, samples=20 00:31:02.363 iops : min= 60, max= 448, avg=387.20, stdev=124.12, samples=20 00:31:02.363 lat (msec) : 50=97.53%, 250=2.42%, 500=0.05% 00:31:02.363 cpu : usr=98.31%, sys=1.23%, ctx=14, majf=0, minf=37 00:31:02.363 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:02.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.363 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.363 issued rwts: total=3888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:02.363 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:02.363 filename0: (groupid=0, jobs=1): err= 0: pid=2249193: Fri Jul 26 11:38:56 2024 00:31:02.363 read: IOPS=388, BW=1554KiB/s (1591kB/s)(15.2MiB/10008msec) 00:31:02.363 slat (nsec): min=9398, max=92222, avg=36211.46, stdev=11990.75 00:31:02.363 clat (msec): min=20, max=420, avg=40.86, stdev=33.84 00:31:02.363 lat (msec): min=20, max=420, avg=40.90, stdev=33.84 00:31:02.363 clat percentiles (msec): 00:31:02.363 | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:31:02.363 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:31:02.363 | 70.00th=[ 37], 80.00th=[ 37], 90.00th=[ 37], 95.00th=[ 37], 00:31:02.363 | 99.00th=[ 236], 99.50th=[ 239], 99.90th=[ 422], 99.95th=[ 422], 00:31:02.363 | 99.99th=[ 422] 00:31:02.363 bw ( KiB/s): min= 128, max= 1792, per=4.13%, avg=1536.00, stdev=515.54, samples=19 00:31:02.363 iops : min= 32, max= 448, avg=384.00, stdev=128.89, samples=19 00:31:02.363 lat (msec) : 50=97.94%, 250=1.59%, 500=0.46% 00:31:02.363 cpu : usr=98.24%, sys=1.21%, ctx=78, majf=0, minf=40 00:31:02.363 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:02.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.363 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.363 issued rwts: total=3888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:02.363 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:02.363 filename0: (groupid=0, jobs=1): err= 0: pid=2249194: Fri Jul 26 11:38:56 2024 00:31:02.363 read: IOPS=388, BW=1553KiB/s (1591kB/s)(15.2MiB/10012msec) 00:31:02.363 slat (usec): min=8, max=123, avg=43.45, stdev=29.50 00:31:02.363 clat (msec): min=27, max=425, avg=40.81, stdev=32.80 00:31:02.363 lat (msec): min=27, max=425, avg=40.85, stdev=32.80 00:31:02.363 clat percentiles (msec): 00:31:02.363 | 1.00th=[ 35], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:31:02.363 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:31:02.363 | 70.00th=[ 37], 80.00th=[ 37], 90.00th=[ 37], 95.00th=[ 37], 00:31:02.363 | 99.00th=[ 236], 99.50th=[ 239], 99.90th=[ 376], 99.95th=[ 426], 00:31:02.363 | 99.99th=[ 426] 00:31:02.363 bw ( KiB/s): min= 128, max= 1792, per=4.16%, avg=1548.80, stdev=505.05, samples=20 00:31:02.363 iops : min= 32, max= 448, avg=387.20, stdev=126.26, samples=20 00:31:02.363 lat (msec) : 50=97.94%, 250=1.59%, 500=0.46% 00:31:02.363 cpu : 
usr=96.57%, sys=2.07%, ctx=117, majf=0, minf=51 00:31:02.363 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:02.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.363 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.363 issued rwts: total=3888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:02.363 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:02.363 filename0: (groupid=0, jobs=1): err= 0: pid=2249195: Fri Jul 26 11:38:56 2024 00:31:02.363 read: IOPS=388, BW=1554KiB/s (1591kB/s)(15.2MiB/10009msec) 00:31:02.363 slat (usec): min=4, max=132, avg=25.88, stdev=15.13 00:31:02.363 clat (msec): min=27, max=370, avg=40.94, stdev=32.49 00:31:02.363 lat (msec): min=27, max=371, avg=40.97, stdev=32.49 00:31:02.363 clat percentiles (msec): 00:31:02.363 | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 37], 00:31:02.363 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:31:02.363 | 70.00th=[ 37], 80.00th=[ 37], 90.00th=[ 37], 95.00th=[ 37], 00:31:02.363 | 99.00th=[ 236], 99.50th=[ 239], 99.90th=[ 372], 99.95th=[ 372], 00:31:02.363 | 99.99th=[ 372] 00:31:02.363 bw ( KiB/s): min= 128, max= 1792, per=4.14%, avg=1542.74, stdev=518.14, samples=19 00:31:02.363 iops : min= 32, max= 448, avg=385.68, stdev=129.53, samples=19 00:31:02.363 lat (msec) : 50=97.94%, 250=1.59%, 500=0.46% 00:31:02.363 cpu : usr=95.20%, sys=2.62%, ctx=368, majf=0, minf=37 00:31:02.363 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:02.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.363 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.363 issued rwts: total=3888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:02.363 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:02.363 filename1: (groupid=0, jobs=1): err= 0: pid=2249196: Fri Jul 26 11:38:56 2024 00:31:02.363 read: IOPS=388, BW=1554KiB/s (1591kB/s)(15.2MiB/10010msec) 00:31:02.363 slat (usec): min=14, max=113, avg=48.83, stdev=22.75 00:31:02.363 clat (msec): min=20, max=487, avg=40.76, stdev=34.22 00:31:02.363 lat (msec): min=20, max=487, avg=40.81, stdev=34.22 00:31:02.363 clat percentiles (msec): 00:31:02.363 | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:31:02.363 | 30.00th=[ 36], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:31:02.363 | 70.00th=[ 37], 80.00th=[ 37], 90.00th=[ 37], 95.00th=[ 37], 00:31:02.363 | 99.00th=[ 236], 99.50th=[ 239], 99.90th=[ 422], 99.95th=[ 489], 00:31:02.363 | 99.99th=[ 489] 00:31:02.363 bw ( KiB/s): min= 128, max= 1792, per=4.13%, avg=1536.00, stdev=515.54, samples=19 00:31:02.363 iops : min= 32, max= 448, avg=384.00, stdev=128.89, samples=19 00:31:02.363 lat (msec) : 50=97.94%, 250=1.65%, 500=0.41% 00:31:02.363 cpu : usr=97.40%, sys=2.08%, ctx=15, majf=0, minf=36 00:31:02.363 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:02.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.363 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.363 issued rwts: total=3888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:02.363 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:02.363 filename1: (groupid=0, jobs=1): err= 0: pid=2249197: Fri Jul 26 11:38:56 2024 00:31:02.363 read: IOPS=388, BW=1554KiB/s (1591kB/s)(15.2MiB/10009msec) 00:31:02.363 slat (usec): min=9, max=122, avg=48.94, stdev=24.17 00:31:02.363 
clat (msec): min=20, max=421, avg=40.74, stdev=33.84 00:31:02.363 lat (msec): min=20, max=421, avg=40.79, stdev=33.84 00:31:02.363 clat percentiles (msec): 00:31:02.363 | 1.00th=[ 35], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:31:02.363 | 30.00th=[ 36], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:31:02.363 | 70.00th=[ 37], 80.00th=[ 37], 90.00th=[ 37], 95.00th=[ 37], 00:31:02.363 | 99.00th=[ 236], 99.50th=[ 239], 99.90th=[ 422], 99.95th=[ 422], 00:31:02.363 | 99.99th=[ 422] 00:31:02.363 bw ( KiB/s): min= 128, max= 1792, per=4.13%, avg=1536.00, stdev=515.54, samples=19 00:31:02.363 iops : min= 32, max= 448, avg=384.00, stdev=128.89, samples=19 00:31:02.363 lat (msec) : 50=97.94%, 250=1.65%, 500=0.41% 00:31:02.363 cpu : usr=97.97%, sys=1.60%, ctx=18, majf=0, minf=33 00:31:02.363 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:02.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.363 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.363 issued rwts: total=3888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:02.363 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:02.363 filename1: (groupid=0, jobs=1): err= 0: pid=2249198: Fri Jul 26 11:38:56 2024 00:31:02.363 read: IOPS=391, BW=1564KiB/s (1602kB/s)(15.3MiB/10024msec) 00:31:02.363 slat (usec): min=4, max=126, avg=68.25, stdev=28.32 00:31:02.363 clat (msec): min=19, max=350, avg=40.31, stdev=27.22 00:31:02.363 lat (msec): min=19, max=350, avg=40.38, stdev=27.22 00:31:02.363 clat percentiles (msec): 00:31:02.363 | 1.00th=[ 35], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:31:02.363 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 37], 00:31:02.363 | 70.00th=[ 37], 80.00th=[ 37], 90.00th=[ 37], 95.00th=[ 37], 00:31:02.363 | 99.00th=[ 232], 99.50th=[ 236], 99.90th=[ 342], 99.95th=[ 351], 00:31:02.363 | 99.99th=[ 351] 00:31:02.363 bw ( KiB/s): min= 368, max= 1792, per=4.19%, avg=1561.55, stdev=462.56, samples=20 00:31:02.363 iops : min= 92, max= 448, avg=390.35, stdev=115.74, samples=20 00:31:02.363 lat (msec) : 20=0.23%, 50=96.56%, 100=0.77%, 250=2.35%, 500=0.10% 00:31:02.363 cpu : usr=98.09%, sys=1.46%, ctx=12, majf=0, minf=56 00:31:02.363 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:02.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.363 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.363 issued rwts: total=3920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:02.363 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:02.363 filename1: (groupid=0, jobs=1): err= 0: pid=2249199: Fri Jul 26 11:38:56 2024 00:31:02.363 read: IOPS=388, BW=1554KiB/s (1591kB/s)(15.2MiB/10009msec) 00:31:02.363 slat (nsec): min=12127, max=90755, avg=35191.02, stdev=9998.46 00:31:02.363 clat (msec): min=27, max=303, avg=40.87, stdev=28.81 00:31:02.364 lat (msec): min=27, max=303, avg=40.90, stdev=28.81 00:31:02.364 clat percentiles (msec): 00:31:02.364 | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:31:02.364 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:31:02.364 | 70.00th=[ 37], 80.00th=[ 37], 90.00th=[ 37], 95.00th=[ 37], 00:31:02.364 | 99.00th=[ 236], 99.50th=[ 239], 99.90th=[ 296], 99.95th=[ 305], 00:31:02.364 | 99.99th=[ 305] 00:31:02.364 bw ( KiB/s): min= 256, max= 1795, per=4.16%, avg=1549.55, stdev=484.52, samples=20 00:31:02.364 iops : min= 64, max= 448, avg=387.20, stdev=121.03, samples=20 
00:31:02.364 lat (msec) : 50=97.53%, 250=2.37%, 500=0.10% 00:31:02.364 cpu : usr=96.13%, sys=2.30%, ctx=86, majf=0, minf=50 00:31:02.364 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:02.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.364 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.364 issued rwts: total=3888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:02.364 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:02.364 filename1: (groupid=0, jobs=1): err= 0: pid=2249200: Fri Jul 26 11:38:56 2024 00:31:02.364 read: IOPS=388, BW=1554KiB/s (1591kB/s)(15.2MiB/10007msec) 00:31:02.364 slat (usec): min=10, max=117, avg=36.90, stdev=15.96 00:31:02.364 clat (msec): min=20, max=419, avg=40.83, stdev=33.77 00:31:02.364 lat (msec): min=20, max=419, avg=40.87, stdev=33.77 00:31:02.364 clat percentiles (msec): 00:31:02.364 | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:31:02.364 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:31:02.364 | 70.00th=[ 37], 80.00th=[ 37], 90.00th=[ 37], 95.00th=[ 37], 00:31:02.364 | 99.00th=[ 236], 99.50th=[ 239], 99.90th=[ 418], 99.95th=[ 418], 00:31:02.364 | 99.99th=[ 418] 00:31:02.364 bw ( KiB/s): min= 128, max= 1792, per=4.13%, avg=1536.00, stdev=515.54, samples=19 00:31:02.364 iops : min= 32, max= 448, avg=384.00, stdev=128.89, samples=19 00:31:02.364 lat (msec) : 50=97.94%, 250=1.65%, 500=0.41% 00:31:02.364 cpu : usr=97.09%, sys=2.07%, ctx=63, majf=0, minf=50 00:31:02.364 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:02.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.364 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.364 issued rwts: total=3888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:02.364 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:02.364 filename1: (groupid=0, jobs=1): err= 0: pid=2249201: Fri Jul 26 11:38:56 2024 00:31:02.364 read: IOPS=388, BW=1553KiB/s (1591kB/s)(15.2MiB/10011msec) 00:31:02.364 slat (usec): min=9, max=122, avg=72.18, stdev=23.15 00:31:02.364 clat (msec): min=27, max=372, avg=40.55, stdev=32.64 00:31:02.364 lat (msec): min=27, max=372, avg=40.63, stdev=32.63 00:31:02.364 clat percentiles (msec): 00:31:02.364 | 1.00th=[ 35], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:31:02.364 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 37], 00:31:02.364 | 70.00th=[ 37], 80.00th=[ 37], 90.00th=[ 37], 95.00th=[ 37], 00:31:02.364 | 99.00th=[ 236], 99.50th=[ 239], 99.90th=[ 372], 99.95th=[ 372], 00:31:02.364 | 99.99th=[ 372] 00:31:02.364 bw ( KiB/s): min= 128, max= 1792, per=4.16%, avg=1548.80, stdev=505.05, samples=20 00:31:02.364 iops : min= 32, max= 448, avg=387.20, stdev=126.26, samples=20 00:31:02.364 lat (msec) : 50=97.94%, 250=1.59%, 500=0.46% 00:31:02.364 cpu : usr=96.69%, sys=1.90%, ctx=66, majf=0, minf=40 00:31:02.364 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:02.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.364 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.364 issued rwts: total=3888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:02.364 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:02.364 filename1: (groupid=0, jobs=1): err= 0: pid=2249202: Fri Jul 26 11:38:56 2024 00:31:02.364 read: IOPS=388, BW=1554KiB/s (1591kB/s)(15.2MiB/10008msec) 
00:31:02.364 slat (usec): min=9, max=115, avg=39.59, stdev=22.62 00:31:02.364 clat (msec): min=26, max=301, avg=40.85, stdev=28.73 00:31:02.364 lat (msec): min=27, max=301, avg=40.89, stdev=28.73 00:31:02.364 clat percentiles (msec): 00:31:02.364 | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:31:02.364 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:31:02.364 | 70.00th=[ 37], 80.00th=[ 37], 90.00th=[ 37], 95.00th=[ 37], 00:31:02.364 | 99.00th=[ 236], 99.50th=[ 236], 99.90th=[ 296], 99.95th=[ 300], 00:31:02.364 | 99.99th=[ 300] 00:31:02.364 bw ( KiB/s): min= 240, max= 1795, per=4.16%, avg=1549.55, stdev=492.67, samples=20 00:31:02.364 iops : min= 60, max= 448, avg=387.20, stdev=123.07, samples=20 00:31:02.364 lat (msec) : 50=97.53%, 250=2.37%, 500=0.10% 00:31:02.364 cpu : usr=97.77%, sys=1.81%, ctx=35, majf=0, minf=32 00:31:02.364 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:02.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.364 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.364 issued rwts: total=3888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:02.364 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:02.364 filename1: (groupid=0, jobs=1): err= 0: pid=2249203: Fri Jul 26 11:38:56 2024 00:31:02.364 read: IOPS=390, BW=1564KiB/s (1601kB/s)(15.3MiB/10026msec) 00:31:02.364 slat (usec): min=4, max=130, avg=55.66, stdev=33.03 00:31:02.364 clat (msec): min=20, max=345, avg=40.44, stdev=27.28 00:31:02.364 lat (msec): min=20, max=345, avg=40.49, stdev=27.28 00:31:02.364 clat percentiles (msec): 00:31:02.364 | 1.00th=[ 35], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:31:02.364 | 30.00th=[ 36], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:31:02.364 | 70.00th=[ 37], 80.00th=[ 37], 90.00th=[ 37], 95.00th=[ 37], 00:31:02.364 | 99.00th=[ 232], 99.50th=[ 239], 99.90th=[ 330], 99.95th=[ 347], 00:31:02.364 | 99.99th=[ 347] 00:31:02.364 bw ( KiB/s): min= 384, max= 1792, per=4.19%, avg=1561.60, stdev=461.70, samples=20 00:31:02.364 iops : min= 96, max= 448, avg=390.40, stdev=115.42, samples=20 00:31:02.364 lat (msec) : 50=96.79%, 100=0.82%, 250=2.24%, 500=0.15% 00:31:02.364 cpu : usr=98.33%, sys=1.25%, ctx=10, majf=0, minf=38 00:31:02.364 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:02.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.364 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.364 issued rwts: total=3920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:02.364 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:02.364 filename2: (groupid=0, jobs=1): err= 0: pid=2249204: Fri Jul 26 11:38:56 2024 00:31:02.364 read: IOPS=387, BW=1551KiB/s (1588kB/s)(15.2MiB/10027msec) 00:31:02.364 slat (usec): min=9, max=109, avg=32.94, stdev=22.94 00:31:02.364 clat (msec): min=26, max=388, avg=40.97, stdev=33.18 00:31:02.364 lat (msec): min=27, max=388, avg=41.00, stdev=33.18 00:31:02.364 clat percentiles (msec): 00:31:02.364 | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 37], 00:31:02.364 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:31:02.364 | 70.00th=[ 37], 80.00th=[ 37], 90.00th=[ 37], 95.00th=[ 37], 00:31:02.364 | 99.00th=[ 236], 99.50th=[ 239], 99.90th=[ 388], 99.95th=[ 388], 00:31:02.364 | 99.99th=[ 388] 00:31:02.364 bw ( KiB/s): min= 128, max= 1795, per=4.16%, avg=1549.55, stdev=505.31, samples=20 00:31:02.364 iops : 
min= 32, max= 448, avg=387.20, stdev=126.26, samples=20 00:31:02.364 lat (msec) : 50=97.94%, 250=1.65%, 500=0.41% 00:31:02.364 cpu : usr=94.03%, sys=3.26%, ctx=146, majf=0, minf=38 00:31:02.364 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:02.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.364 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.364 issued rwts: total=3888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:02.364 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:02.364 filename2: (groupid=0, jobs=1): err= 0: pid=2249205: Fri Jul 26 11:38:56 2024 00:31:02.364 read: IOPS=388, BW=1554KiB/s (1591kB/s)(15.2MiB/10007msec) 00:31:02.364 slat (usec): min=9, max=139, avg=74.51, stdev=25.83 00:31:02.364 clat (msec): min=20, max=419, avg=40.52, stdev=33.80 00:31:02.364 lat (msec): min=20, max=419, avg=40.59, stdev=33.80 00:31:02.364 clat percentiles (msec): 00:31:02.364 | 1.00th=[ 35], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:31:02.364 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 37], 00:31:02.364 | 70.00th=[ 37], 80.00th=[ 37], 90.00th=[ 37], 95.00th=[ 37], 00:31:02.364 | 99.00th=[ 236], 99.50th=[ 239], 99.90th=[ 422], 99.95th=[ 422], 00:31:02.364 | 99.99th=[ 422] 00:31:02.364 bw ( KiB/s): min= 128, max= 1792, per=4.13%, avg=1536.00, stdev=515.54, samples=19 00:31:02.364 iops : min= 32, max= 448, avg=384.00, stdev=128.89, samples=19 00:31:02.364 lat (msec) : 50=97.94%, 250=1.65%, 500=0.41% 00:31:02.364 cpu : usr=98.08%, sys=1.42%, ctx=16, majf=0, minf=47 00:31:02.364 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:02.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.364 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.364 issued rwts: total=3888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:02.364 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:02.364 filename2: (groupid=0, jobs=1): err= 0: pid=2249206: Fri Jul 26 11:38:56 2024 00:31:02.364 read: IOPS=388, BW=1554KiB/s (1591kB/s)(15.2MiB/10008msec) 00:31:02.364 slat (usec): min=9, max=122, avg=41.20, stdev=29.16 00:31:02.364 clat (msec): min=34, max=237, avg=40.84, stdev=28.55 00:31:02.364 lat (msec): min=34, max=237, avg=40.88, stdev=28.55 00:31:02.364 clat percentiles (msec): 00:31:02.365 | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:31:02.365 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:31:02.365 | 70.00th=[ 37], 80.00th=[ 37], 90.00th=[ 37], 95.00th=[ 37], 00:31:02.365 | 99.00th=[ 236], 99.50th=[ 236], 99.90th=[ 239], 99.95th=[ 239], 00:31:02.365 | 99.99th=[ 239] 00:31:02.365 bw ( KiB/s): min= 256, max= 1795, per=4.16%, avg=1549.55, stdev=484.52, samples=20 00:31:02.365 iops : min= 64, max= 448, avg=387.20, stdev=121.03, samples=20 00:31:02.365 lat (msec) : 50=97.53%, 250=2.47% 00:31:02.365 cpu : usr=96.63%, sys=2.33%, ctx=178, majf=0, minf=34 00:31:02.365 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:02.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.365 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.365 issued rwts: total=3888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:02.365 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:02.365 filename2: (groupid=0, jobs=1): err= 0: pid=2249207: Fri Jul 26 11:38:56 2024 00:31:02.365 read: IOPS=388, 
BW=1553KiB/s (1591kB/s)(15.2MiB/10012msec) 00:31:02.365 slat (usec): min=6, max=105, avg=26.74, stdev=14.87 00:31:02.365 clat (msec): min=21, max=371, avg=40.99, stdev=32.54 00:31:02.365 lat (msec): min=21, max=372, avg=41.02, stdev=32.54 00:31:02.365 clat percentiles (msec): 00:31:02.365 | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 37], 00:31:02.365 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:31:02.365 | 70.00th=[ 37], 80.00th=[ 37], 90.00th=[ 37], 95.00th=[ 37], 00:31:02.365 | 99.00th=[ 236], 99.50th=[ 239], 99.90th=[ 372], 99.95th=[ 372], 00:31:02.365 | 99.99th=[ 372] 00:31:02.365 bw ( KiB/s): min= 128, max= 1792, per=4.14%, avg=1541.89, stdev=517.34, samples=19 00:31:02.365 iops : min= 32, max= 448, avg=385.47, stdev=129.33, samples=19 00:31:02.365 lat (msec) : 50=97.94%, 250=1.59%, 500=0.46% 00:31:02.365 cpu : usr=95.31%, sys=2.58%, ctx=246, majf=0, minf=58 00:31:02.365 IO depths : 1=0.3%, 2=6.6%, 4=25.0%, 8=55.9%, 16=12.2%, 32=0.0%, >=64=0.0% 00:31:02.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.365 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.365 issued rwts: total=3888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:02.365 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:02.365 filename2: (groupid=0, jobs=1): err= 0: pid=2249208: Fri Jul 26 11:38:56 2024 00:31:02.365 read: IOPS=389, BW=1559KiB/s (1597kB/s)(15.2MiB/10016msec) 00:31:02.365 slat (usec): min=8, max=125, avg=35.17, stdev=18.72 00:31:02.365 clat (msec): min=27, max=303, avg=40.76, stdev=27.34 00:31:02.365 lat (msec): min=27, max=303, avg=40.79, stdev=27.34 00:31:02.365 clat percentiles (msec): 00:31:02.365 | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 37], 00:31:02.365 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:31:02.365 | 70.00th=[ 37], 80.00th=[ 37], 90.00th=[ 37], 95.00th=[ 37], 00:31:02.365 | 99.00th=[ 236], 99.50th=[ 239], 99.90th=[ 296], 99.95th=[ 305], 00:31:02.365 | 99.99th=[ 305] 00:31:02.365 bw ( KiB/s): min= 256, max= 1792, per=4.18%, avg=1555.20, stdev=479.43, samples=20 00:31:02.365 iops : min= 64, max= 448, avg=388.80, stdev=119.86, samples=20 00:31:02.365 lat (msec) : 50=97.13%, 100=0.36%, 250=2.41%, 500=0.10% 00:31:02.365 cpu : usr=91.57%, sys=4.18%, ctx=470, majf=0, minf=39 00:31:02.365 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:02.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.365 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.365 issued rwts: total=3904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:02.365 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:02.365 filename2: (groupid=0, jobs=1): err= 0: pid=2249209: Fri Jul 26 11:38:56 2024 00:31:02.365 read: IOPS=390, BW=1564KiB/s (1601kB/s)(15.3MiB/10028msec) 00:31:02.365 slat (usec): min=4, max=123, avg=38.11, stdev=27.70 00:31:02.365 clat (msec): min=33, max=238, avg=40.59, stdev=26.58 00:31:02.365 lat (msec): min=33, max=238, avg=40.63, stdev=26.58 00:31:02.365 clat percentiles (msec): 00:31:02.365 | 1.00th=[ 35], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:31:02.365 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:31:02.365 | 70.00th=[ 37], 80.00th=[ 37], 90.00th=[ 37], 95.00th=[ 38], 00:31:02.365 | 99.00th=[ 232], 99.50th=[ 236], 99.90th=[ 239], 99.95th=[ 239], 00:31:02.365 | 99.99th=[ 239] 00:31:02.365 bw ( KiB/s): min= 384, max= 1792, per=4.19%, avg=1561.60, 
stdev=461.70, samples=20 00:31:02.365 iops : min= 96, max= 448, avg=390.40, stdev=115.42, samples=20 00:31:02.365 lat (msec) : 50=96.73%, 100=0.82%, 250=2.45% 00:31:02.365 cpu : usr=95.88%, sys=2.42%, ctx=400, majf=0, minf=47 00:31:02.365 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:02.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.365 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.365 issued rwts: total=3920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:02.365 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:02.365 filename2: (groupid=0, jobs=1): err= 0: pid=2249210: Fri Jul 26 11:38:56 2024 00:31:02.365 read: IOPS=388, BW=1554KiB/s (1591kB/s)(15.2MiB/10010msec) 00:31:02.365 slat (nsec): min=5010, max=89705, avg=35382.19, stdev=9513.81 00:31:02.365 clat (msec): min=20, max=422, avg=40.87, stdev=34.00 00:31:02.365 lat (msec): min=20, max=422, avg=40.90, stdev=34.00 00:31:02.365 clat percentiles (msec): 00:31:02.365 | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:31:02.365 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:31:02.365 | 70.00th=[ 37], 80.00th=[ 37], 90.00th=[ 37], 95.00th=[ 37], 00:31:02.365 | 99.00th=[ 236], 99.50th=[ 239], 99.90th=[ 422], 99.95th=[ 422], 00:31:02.365 | 99.99th=[ 422] 00:31:02.365 bw ( KiB/s): min= 128, max= 1792, per=4.13%, avg=1536.00, stdev=515.54, samples=19 00:31:02.365 iops : min= 32, max= 448, avg=384.00, stdev=128.89, samples=19 00:31:02.365 lat (msec) : 50=97.94%, 250=1.59%, 500=0.46% 00:31:02.365 cpu : usr=97.43%, sys=1.95%, ctx=115, majf=0, minf=37 00:31:02.365 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:02.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.365 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.365 issued rwts: total=3888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:02.365 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:02.365 filename2: (groupid=0, jobs=1): err= 0: pid=2249211: Fri Jul 26 11:38:56 2024 00:31:02.365 read: IOPS=388, BW=1553KiB/s (1590kB/s)(15.2MiB/10016msec) 00:31:02.365 slat (nsec): min=9048, max=54316, avg=23277.99, stdev=8680.22 00:31:02.365 clat (msec): min=19, max=387, avg=41.01, stdev=32.23 00:31:02.365 lat (msec): min=19, max=387, avg=41.03, stdev=32.22 00:31:02.365 clat percentiles (msec): 00:31:02.365 | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 37], 00:31:02.365 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:31:02.365 | 70.00th=[ 37], 80.00th=[ 37], 90.00th=[ 37], 95.00th=[ 37], 00:31:02.365 | 99.00th=[ 236], 99.50th=[ 239], 99.90th=[ 388], 99.95th=[ 388], 00:31:02.365 | 99.99th=[ 388] 00:31:02.365 bw ( KiB/s): min= 128, max= 1795, per=4.16%, avg=1549.55, stdev=499.15, samples=20 00:31:02.365 iops : min= 32, max= 448, avg=387.20, stdev=124.71, samples=20 00:31:02.365 lat (msec) : 20=0.05%, 50=97.48%, 250=2.01%, 500=0.46% 00:31:02.365 cpu : usr=97.70%, sys=1.88%, ctx=22, majf=0, minf=48 00:31:02.365 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:02.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.365 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:02.365 issued rwts: total=3888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:02.365 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:02.365 00:31:02.365 Run 
status group 0 (all jobs): 00:31:02.365 READ: bw=36.3MiB/s (38.1MB/s), 1546KiB/s-1579KiB/s (1583kB/s-1617kB/s), io=365MiB (383MB), run=10007-10048msec 00:31:02.365 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:02.365 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:02.365 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:02.365 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:02.365 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:02.365 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:02.365 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.365 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:02.365 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.365 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:02.365 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.365 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:02.365 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.365 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:02.365 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:02.365 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:02.365 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:02.365 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.366 11:38:56 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:02.366 bdev_null0 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:02.366 [2024-07-26 11:38:56.672985] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 
512 --md-size 16 --dif-type 1 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:02.366 bdev_null1 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:02.366 { 00:31:02.366 "params": { 00:31:02.366 "name": "Nvme$subsystem", 00:31:02.366 "trtype": "$TEST_TRANSPORT", 00:31:02.366 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:02.366 "adrfam": "ipv4", 00:31:02.366 "trsvcid": "$NVMF_PORT", 00:31:02.366 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:02.366 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:02.366 "hdgst": ${hdgst:-false}, 00:31:02.366 "ddgst": ${ddgst:-false} 00:31:02.366 }, 00:31:02.366 "method": "bdev_nvme_attach_controller" 00:31:02.366 } 00:31:02.366 EOF 00:31:02.366 )") 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local 
fio_dir=/usr/src/fio 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:02.366 11:38:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:02.366 { 00:31:02.366 "params": { 00:31:02.366 "name": "Nvme$subsystem", 00:31:02.366 "trtype": "$TEST_TRANSPORT", 00:31:02.366 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:02.366 "adrfam": "ipv4", 00:31:02.366 "trsvcid": "$NVMF_PORT", 00:31:02.366 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:02.367 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:02.367 "hdgst": ${hdgst:-false}, 00:31:02.367 "ddgst": ${ddgst:-false} 00:31:02.367 }, 00:31:02.367 "method": "bdev_nvme_attach_controller" 00:31:02.367 } 00:31:02.367 EOF 00:31:02.367 )") 00:31:02.367 11:38:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:02.367 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:02.367 11:38:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:02.367 11:38:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
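The xtrace above shows the fio_bdev wrapper probing the spdk_bdev plugin with ldd for sanitizer runtimes before assembling LD_PRELOAD. A condensed sketch of that logic, with paths taken from the log (neither libasan nor libclang_rt.asan is linked in this build, so only the plugin itself ends up preloaded):

plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
asan_libs=''
for sanitizer in libasan libclang_rt.asan; do
    # field 3 of "libasan.so.8 => /usr/lib64/libasan.so.8 (0x...)" is the resolved path
    lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n $lib ]] && asan_libs+="$lib "
done
# a linked sanitizer runtime must be preloaded ahead of the plugin itself
LD_PRELOAD="$asan_libs$plugin" /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf /dev/fd/62 /dev/fd/61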
00:31:02.367 11:38:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:02.367 11:38:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:02.367 "params": { 00:31:02.367 "name": "Nvme0", 00:31:02.367 "trtype": "tcp", 00:31:02.367 "traddr": "10.0.0.2", 00:31:02.367 "adrfam": "ipv4", 00:31:02.367 "trsvcid": "4420", 00:31:02.367 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:02.367 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:02.367 "hdgst": false, 00:31:02.367 "ddgst": false 00:31:02.367 }, 00:31:02.367 "method": "bdev_nvme_attach_controller" 00:31:02.367 },{ 00:31:02.367 "params": { 00:31:02.367 "name": "Nvme1", 00:31:02.367 "trtype": "tcp", 00:31:02.367 "traddr": "10.0.0.2", 00:31:02.367 "adrfam": "ipv4", 00:31:02.367 "trsvcid": "4420", 00:31:02.367 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:02.367 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:02.367 "hdgst": false, 00:31:02.367 "ddgst": false 00:31:02.367 }, 00:31:02.367 "method": "bdev_nvme_attach_controller" 00:31:02.367 }' 00:31:02.367 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:02.367 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:02.367 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:02.367 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:02.367 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:02.367 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:02.367 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:02.367 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:02.367 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:02.367 11:38:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:02.367 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:02.367 ... 00:31:02.367 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:02.367 ... 
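The printf output above is the list of bdev_nvme_attach_controller entries that fio's spdk_bdev ioengine consumes over /dev/fd/62. As a standalone file the same content would sit inside SPDK's usual JSON config skeleton; a minimal single-controller sketch, where the surrounding subsystems/bdev wrapper is assumed from SPDK's standard config format and only the entry itself appears verbatim in the trace:

cat > conf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF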
00:31:02.367 fio-3.35 00:31:02.367 Starting 4 threads 00:31:02.367 EAL: No free 2048 kB hugepages reported on node 1 00:31:07.668 00:31:07.668 filename0: (groupid=0, jobs=1): err= 0: pid=2250592: Fri Jul 26 11:39:02 2024 00:31:07.668 read: IOPS=1751, BW=13.7MiB/s (14.3MB/s)(68.4MiB/5002msec) 00:31:07.668 slat (nsec): min=4298, max=88497, avg=16062.42, stdev=8518.04 00:31:07.668 clat (usec): min=1387, max=8132, avg=4517.06, stdev=693.20 00:31:07.668 lat (usec): min=1413, max=8141, avg=4533.13, stdev=692.52 00:31:07.668 clat percentiles (usec): 00:31:07.668 | 1.00th=[ 3163], 5.00th=[ 3687], 10.00th=[ 3884], 20.00th=[ 4047], 00:31:07.668 | 30.00th=[ 4178], 40.00th=[ 4359], 50.00th=[ 4424], 60.00th=[ 4490], 00:31:07.668 | 70.00th=[ 4621], 80.00th=[ 4817], 90.00th=[ 5342], 95.00th=[ 6128], 00:31:07.668 | 99.00th=[ 6849], 99.50th=[ 7111], 99.90th=[ 7701], 99.95th=[ 7767], 00:31:07.668 | 99.99th=[ 8160] 00:31:07.668 bw ( KiB/s): min=13664, max=14784, per=24.82%, avg=13975.11, stdev=336.01, samples=9 00:31:07.668 iops : min= 1708, max= 1848, avg=1746.89, stdev=42.00, samples=9 00:31:07.668 lat (msec) : 2=0.06%, 4=16.89%, 10=83.05% 00:31:07.668 cpu : usr=94.78%, sys=4.72%, ctx=13, majf=0, minf=54 00:31:07.668 IO depths : 1=0.1%, 2=3.7%, 4=68.9%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:07.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.668 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.668 issued rwts: total=8761,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.668 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:07.668 filename0: (groupid=0, jobs=1): err= 0: pid=2250593: Fri Jul 26 11:39:02 2024 00:31:07.668 read: IOPS=1796, BW=14.0MiB/s (14.7MB/s)(70.2MiB/5004msec) 00:31:07.668 slat (nsec): min=4170, max=60310, avg=14403.07, stdev=7327.60 00:31:07.668 clat (usec): min=2145, max=9062, avg=4408.42, stdev=707.25 00:31:07.668 lat (usec): min=2160, max=9083, avg=4422.82, stdev=707.43 00:31:07.668 clat percentiles (usec): 00:31:07.668 | 1.00th=[ 2999], 5.00th=[ 3425], 10.00th=[ 3687], 20.00th=[ 3949], 00:31:07.668 | 30.00th=[ 4113], 40.00th=[ 4178], 50.00th=[ 4293], 60.00th=[ 4490], 00:31:07.668 | 70.00th=[ 4555], 80.00th=[ 4686], 90.00th=[ 5342], 95.00th=[ 5932], 00:31:07.668 | 99.00th=[ 6718], 99.50th=[ 6849], 99.90th=[ 8094], 99.95th=[ 8848], 00:31:07.668 | 99.99th=[ 9110] 00:31:07.668 bw ( KiB/s): min=13888, max=14832, per=25.53%, avg=14377.60, stdev=339.68, samples=10 00:31:07.668 iops : min= 1736, max= 1854, avg=1797.20, stdev=42.46, samples=10 00:31:07.668 lat (msec) : 4=22.21%, 10=77.79% 00:31:07.668 cpu : usr=94.74%, sys=4.78%, ctx=9, majf=0, minf=31 00:31:07.668 IO depths : 1=0.1%, 2=3.8%, 4=66.4%, 8=29.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:07.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.668 complete : 0=0.0%, 4=94.4%, 8=5.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.668 issued rwts: total=8991,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.668 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:07.668 filename1: (groupid=0, jobs=1): err= 0: pid=2250594: Fri Jul 26 11:39:02 2024 00:31:07.668 read: IOPS=1783, BW=13.9MiB/s (14.6MB/s)(69.7MiB/5003msec) 00:31:07.668 slat (nsec): min=4417, max=60135, avg=18140.83, stdev=8347.46 00:31:07.668 clat (usec): min=962, max=8225, avg=4427.66, stdev=777.85 00:31:07.668 lat (usec): min=982, max=8238, avg=4445.80, stdev=777.69 00:31:07.668 clat percentiles (usec): 00:31:07.668 | 1.00th=[ 2933], 5.00th=[ 3425], 10.00th=[ 
3654], 20.00th=[ 3884], 00:31:07.668 | 30.00th=[ 4080], 40.00th=[ 4228], 50.00th=[ 4359], 60.00th=[ 4424], 00:31:07.668 | 70.00th=[ 4555], 80.00th=[ 4686], 90.00th=[ 5669], 95.00th=[ 6128], 00:31:07.668 | 99.00th=[ 6783], 99.50th=[ 7046], 99.90th=[ 8029], 99.95th=[ 8029], 00:31:07.668 | 99.99th=[ 8225] 00:31:07.668 bw ( KiB/s): min=13824, max=15168, per=25.34%, avg=14268.30, stdev=398.90, samples=10 00:31:07.668 iops : min= 1728, max= 1896, avg=1783.50, stdev=49.90, samples=10 00:31:07.668 lat (usec) : 1000=0.01% 00:31:07.668 lat (msec) : 2=0.06%, 4=25.26%, 10=74.68% 00:31:07.668 cpu : usr=95.24%, sys=4.22%, ctx=8, majf=0, minf=38 00:31:07.668 IO depths : 1=0.1%, 2=4.5%, 4=68.0%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:07.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.668 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.668 issued rwts: total=8924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.668 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:07.668 filename1: (groupid=0, jobs=1): err= 0: pid=2250595: Fri Jul 26 11:39:02 2024 00:31:07.668 read: IOPS=1708, BW=13.3MiB/s (14.0MB/s)(66.8MiB/5002msec) 00:31:07.668 slat (nsec): min=4219, max=61407, avg=14519.38, stdev=7638.70 00:31:07.668 clat (usec): min=1707, max=8421, avg=4638.55, stdev=832.27 00:31:07.668 lat (usec): min=1716, max=8461, avg=4653.07, stdev=831.57 00:31:07.668 clat percentiles (usec): 00:31:07.668 | 1.00th=[ 3359], 5.00th=[ 3818], 10.00th=[ 4015], 20.00th=[ 4080], 00:31:07.668 | 30.00th=[ 4178], 40.00th=[ 4293], 50.00th=[ 4424], 60.00th=[ 4490], 00:31:07.668 | 70.00th=[ 4686], 80.00th=[ 4817], 90.00th=[ 6128], 95.00th=[ 6456], 00:31:07.668 | 99.00th=[ 7242], 99.50th=[ 7504], 99.90th=[ 8094], 99.95th=[ 8225], 00:31:07.668 | 99.99th=[ 8455] 00:31:07.668 bw ( KiB/s): min=13296, max=14176, per=24.24%, avg=13651.56, stdev=280.55, samples=9 00:31:07.668 iops : min= 1662, max= 1772, avg=1706.44, stdev=35.07, samples=9 00:31:07.668 lat (msec) : 2=0.09%, 4=9.97%, 10=89.94% 00:31:07.668 cpu : usr=94.84%, sys=4.70%, ctx=7, majf=0, minf=45 00:31:07.668 IO depths : 1=0.2%, 2=0.8%, 4=72.1%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:07.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.668 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.668 issued rwts: total=8546,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.668 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:07.668 00:31:07.669 Run status group 0 (all jobs): 00:31:07.669 READ: bw=55.0MiB/s (57.7MB/s), 13.3MiB/s-14.0MiB/s (14.0MB/s-14.7MB/s), io=275MiB (289MB), run=5002-5004msec 00:31:07.669 11:39:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:07.669 11:39:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:07.669 11:39:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:07.669 11:39:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:07.669 11:39:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:07.669 11:39:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:07.669 11:39:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.669 11:39:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:07.669 11:39:02 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.669 11:39:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:07.669 11:39:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.669 11:39:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:07.669 11:39:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.669 11:39:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:07.669 11:39:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:07.669 11:39:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:07.669 11:39:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:07.669 11:39:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.669 11:39:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:07.669 11:39:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.669 11:39:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:07.669 11:39:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.669 11:39:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:07.669 11:39:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.669 00:31:07.669 real 0m24.323s 00:31:07.669 user 4m30.070s 00:31:07.669 sys 0m8.161s 00:31:07.669 11:39:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:07.669 11:39:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:07.669 ************************************ 00:31:07.669 END TEST fio_dif_rand_params 00:31:07.669 ************************************ 00:31:07.669 11:39:03 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:07.669 11:39:03 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:07.669 11:39:03 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:07.669 11:39:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:07.669 ************************************ 00:31:07.669 START TEST fio_dif_digest 00:31:07.669 ************************************ 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:07.669 bdev_null0 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:07.669 [2024-07-26 11:39:03.087449] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:07.669 { 00:31:07.669 "params": { 00:31:07.669 "name": "Nvme$subsystem", 00:31:07.669 "trtype": "$TEST_TRANSPORT", 00:31:07.669 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:31:07.669 "adrfam": "ipv4", 00:31:07.669 "trsvcid": "$NVMF_PORT", 00:31:07.669 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:07.669 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:07.669 "hdgst": ${hdgst:-false}, 00:31:07.669 "ddgst": ${ddgst:-false} 00:31:07.669 }, 00:31:07.669 "method": "bdev_nvme_attach_controller" 00:31:07.669 } 00:31:07.669 EOF 00:31:07.669 )") 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:07.669 "params": { 00:31:07.669 "name": "Nvme0", 00:31:07.669 "trtype": "tcp", 00:31:07.669 "traddr": "10.0.0.2", 00:31:07.669 "adrfam": "ipv4", 00:31:07.669 "trsvcid": "4420", 00:31:07.669 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:07.669 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:07.669 "hdgst": true, 00:31:07.669 "ddgst": true 00:31:07.669 }, 00:31:07.669 "method": "bdev_nvme_attach_controller" 00:31:07.669 }' 00:31:07.669 11:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:07.670 11:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:07.670 11:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:07.670 11:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:07.670 11:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:07.670 11:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:07.670 11:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:07.670 11:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:07.670 11:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:07.670 11:39:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:07.926 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:07.926 ... 
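The attach entry printed just above differs from the earlier rand_params config only in "hdgst": true and "ddgst": true, which enables NVMe/TCP header and data digests on the initiator-side connection. A sketch of a job file consistent with the digest run's traced parameters (bs=128k,128k,128k, iodepth=3, numjobs=3, runtime=10); the section name and filename=Nvme0n1 are assumptions about what dif.sh's gen_fio_conf emits, and conf.json is the earlier sketch with the two digest flags flipped to true:

cat > digest-job.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
direct=1
time_based=1
runtime=10
rw=randread
bs=128k,128k,128k
iodepth=3
numjobs=3

[filename0]
filename=Nvme0n1
EOF
# the harness passes conf and job over /dev/fd; plain files work just as well
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf ./conf.json digest-job.fio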
00:31:07.926 fio-3.35 00:31:07.926 Starting 3 threads 00:31:07.926 EAL: No free 2048 kB hugepages reported on node 1 00:31:20.148 00:31:20.148 filename0: (groupid=0, jobs=1): err= 0: pid=2251580: Fri Jul 26 11:39:14 2024 00:31:20.148 read: IOPS=188, BW=23.5MiB/s (24.7MB/s)(236MiB/10049msec) 00:31:20.148 slat (nsec): min=5590, max=54326, avg=15665.65, stdev=2574.78 00:31:20.148 clat (usec): min=9390, max=53149, avg=15908.19, stdev=1794.27 00:31:20.148 lat (usec): min=9405, max=53171, avg=15923.86, stdev=1794.33 00:31:20.148 clat percentiles (usec): 00:31:20.148 | 1.00th=[10814], 5.00th=[13698], 10.00th=[14484], 20.00th=[15139], 00:31:20.148 | 30.00th=[15401], 40.00th=[15664], 50.00th=[15926], 60.00th=[16188], 00:31:20.148 | 70.00th=[16450], 80.00th=[16909], 90.00th=[17433], 95.00th=[17957], 00:31:20.148 | 99.00th=[19006], 99.50th=[19268], 99.90th=[50594], 99.95th=[53216], 00:31:20.148 | 99.99th=[53216] 00:31:20.148 bw ( KiB/s): min=23296, max=25856, per=33.29%, avg=24155.95, stdev=643.59, samples=20 00:31:20.148 iops : min= 182, max= 202, avg=188.70, stdev= 5.04, samples=20 00:31:20.148 lat (msec) : 10=0.11%, 20=99.74%, 50=0.05%, 100=0.11% 00:31:20.148 cpu : usr=92.76%, sys=6.74%, ctx=23, majf=0, minf=170 00:31:20.148 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:20.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:20.148 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:20.148 issued rwts: total=1890,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:20.148 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:20.148 filename0: (groupid=0, jobs=1): err= 0: pid=2251581: Fri Jul 26 11:39:14 2024 00:31:20.148 read: IOPS=185, BW=23.2MiB/s (24.3MB/s)(233MiB/10051msec) 00:31:20.148 slat (nsec): min=5176, max=34471, avg=16207.68, stdev=2151.51 00:31:20.148 clat (usec): min=9767, max=55840, avg=16139.74, stdev=1911.19 00:31:20.148 lat (usec): min=9782, max=55856, avg=16155.95, stdev=1911.21 00:31:20.148 clat percentiles (usec): 00:31:20.148 | 1.00th=[10945], 5.00th=[13829], 10.00th=[14484], 20.00th=[15139], 00:31:20.148 | 30.00th=[15533], 40.00th=[15795], 50.00th=[16188], 60.00th=[16450], 00:31:20.148 | 70.00th=[16909], 80.00th=[17171], 90.00th=[17695], 95.00th=[18220], 00:31:20.148 | 99.00th=[19268], 99.50th=[19792], 99.90th=[53216], 99.95th=[55837], 00:31:20.148 | 99.99th=[55837] 00:31:20.148 bw ( KiB/s): min=23040, max=24832, per=32.81%, avg=23808.00, stdev=518.69, samples=20 00:31:20.148 iops : min= 180, max= 194, avg=186.00, stdev= 4.05, samples=20 00:31:20.148 lat (msec) : 10=0.16%, 20=99.41%, 50=0.32%, 100=0.11% 00:31:20.148 cpu : usr=93.43%, sys=6.03%, ctx=50, majf=0, minf=111 00:31:20.148 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:20.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:20.148 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:20.148 issued rwts: total=1863,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:20.148 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:20.148 filename0: (groupid=0, jobs=1): err= 0: pid=2251582: Fri Jul 26 11:39:14 2024 00:31:20.148 read: IOPS=194, BW=24.3MiB/s (25.5MB/s)(243MiB/10009msec) 00:31:20.148 slat (nsec): min=5778, max=51191, avg=20606.30, stdev=3843.18 00:31:20.148 clat (usec): min=11173, max=57733, avg=15410.85, stdev=3781.67 00:31:20.148 lat (usec): min=11194, max=57755, avg=15431.46, stdev=3781.63 00:31:20.148 clat percentiles (usec): 
00:31:20.148 | 1.00th=[12518], 5.00th=[13304], 10.00th=[13698], 20.00th=[14091], 00:31:20.148 | 30.00th=[14484], 40.00th=[14877], 50.00th=[15139], 60.00th=[15401], 00:31:20.148 | 70.00th=[15664], 80.00th=[15926], 90.00th=[16581], 95.00th=[16909], 00:31:20.148 | 99.00th=[19268], 99.50th=[55837], 99.90th=[57410], 99.95th=[57934], 00:31:20.148 | 99.99th=[57934] 00:31:20.148 bw ( KiB/s): min=22016, max=26112, per=34.26%, avg=24857.60, stdev=1211.90, samples=20 00:31:20.148 iops : min= 172, max= 204, avg=194.20, stdev= 9.47, samples=20 00:31:20.148 lat (msec) : 20=99.07%, 50=0.15%, 100=0.77% 00:31:20.148 cpu : usr=89.88%, sys=8.61%, ctx=504, majf=0, minf=74 00:31:20.148 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:20.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:20.148 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:20.148 issued rwts: total=1945,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:20.148 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:20.148 00:31:20.148 Run status group 0 (all jobs): 00:31:20.148 READ: bw=70.9MiB/s (74.3MB/s), 23.2MiB/s-24.3MiB/s (24.3MB/s-25.5MB/s), io=712MiB (747MB), run=10009-10051msec 00:31:20.148 11:39:14 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:20.148 11:39:14 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:31:20.148 11:39:14 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:31:20.148 11:39:14 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:20.148 11:39:14 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:31:20.148 11:39:14 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:20.148 11:39:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.148 11:39:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:20.148 11:39:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.148 11:39:14 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:20.148 11:39:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.148 11:39:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:20.148 11:39:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.148 00:31:20.148 real 0m11.384s 00:31:20.148 user 0m29.113s 00:31:20.148 sys 0m2.481s 00:31:20.148 11:39:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:20.148 11:39:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:20.148 ************************************ 00:31:20.148 END TEST fio_dif_digest 00:31:20.148 ************************************ 00:31:20.148 11:39:14 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:20.148 11:39:14 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:31:20.148 11:39:14 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:20.148 11:39:14 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:31:20.148 11:39:14 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:20.148 11:39:14 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:31:20.148 11:39:14 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:20.148 11:39:14 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:20.148 rmmod nvme_tcp 00:31:20.148 rmmod 
nvme_fabrics 00:31:20.148 rmmod nvme_keyring 00:31:20.148 11:39:14 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:20.148 11:39:14 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:31:20.148 11:39:14 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:31:20.148 11:39:14 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2245297 ']' 00:31:20.148 11:39:14 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2245297 00:31:20.148 11:39:14 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 2245297 ']' 00:31:20.149 11:39:14 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 2245297 00:31:20.149 11:39:14 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:31:20.149 11:39:14 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:20.149 11:39:14 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2245297 00:31:20.149 11:39:14 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:20.149 11:39:14 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:20.149 11:39:14 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2245297' 00:31:20.149 killing process with pid 2245297 00:31:20.149 11:39:14 nvmf_dif -- common/autotest_common.sh@969 -- # kill 2245297 00:31:20.149 11:39:14 nvmf_dif -- common/autotest_common.sh@974 -- # wait 2245297 00:31:20.149 11:39:14 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:20.149 11:39:14 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:20.715 Waiting for block devices as requested 00:31:20.715 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:31:20.974 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:20.974 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:21.233 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:21.233 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:21.233 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:21.233 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:21.492 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:21.492 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:21.492 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:21.492 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:21.752 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:21.752 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:21.752 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:22.012 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:22.012 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:22.012 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:22.272 11:39:17 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:22.272 11:39:17 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:22.272 11:39:17 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:22.272 11:39:17 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:22.272 11:39:17 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:22.272 11:39:17 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:22.272 11:39:17 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:24.175 11:39:19 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:24.175 00:31:24.175 real 1m8.740s 00:31:24.175 user 6m27.091s 00:31:24.175 sys 0m21.728s 00:31:24.175 11:39:19 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:24.175 11:39:19 nvmf_dif -- 
common/autotest_common.sh@10 -- # set +x 00:31:24.175 ************************************ 00:31:24.175 END TEST nvmf_dif 00:31:24.175 ************************************ 00:31:24.175 11:39:19 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:24.175 11:39:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:24.175 11:39:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:24.175 11:39:19 -- common/autotest_common.sh@10 -- # set +x 00:31:24.175 ************************************ 00:31:24.175 START TEST nvmf_abort_qd_sizes 00:31:24.175 ************************************ 00:31:24.175 11:39:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:24.434 * Looking for test storage... 00:31:24.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:24.434 11:39:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:24.434 11:39:19 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:31:24.434 11:39:19 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:24.434 11:39:19 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:24.434 11:39:19 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:24.434 11:39:19 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:24.434 11:39:19 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:24.434 11:39:19 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:24.434 11:39:19 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:24.435 11:39:19 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:24.435 11:39:19 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:24.435 11:39:19 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:24.435 11:39:19 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:24.435 11:39:19 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:31:24.435 11:39:19 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:24.435 11:39:19 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:24.435 11:39:19 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:24.435 11:39:19 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:24.435 11:39:19 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:24.435 11:39:19 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:24.435 11:39:19 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:24.435 11:39:19 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:24.435 11:39:19 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.435 11:39:19 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.435 11:39:19 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.435 11:39:19 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:31:24.435 11:39:19 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.435 11:39:19 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:31:24.435 11:39:19 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:24.435 11:39:19 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:24.435 11:39:19 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:24.435 11:39:19 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:24.435 11:39:19 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:24.435 11:39:19 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:24.435 11:39:19 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:24.435 11:39:19 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:24.435 11:39:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:31:24.435 11:39:19 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:24.435 11:39:19 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:24.435 11:39:19 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:24.435 11:39:19 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:24.435 11:39:19 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:24.435 11:39:19 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:24.435 11:39:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:24.435 11:39:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:24.435 11:39:19 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:24.435 11:39:19 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:24.435 11:39:19 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:31:24.435 11:39:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:31:26.970 Found 0000:84:00.0 (0x8086 - 0x159b) 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:31:26.970 Found 0000:84:00.1 (0x8086 - 0x159b) 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:31:26.970 Found net devices under 0000:84:00.0: cvl_0_0 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:31:26.970 Found net devices under 0000:84:00.1: cvl_0_1 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
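[Note] gather_supported_nvmf_pci_devs above matched the two Intel E810 functions (0x8086:0x159b) and resolved each to its kernel netdev by listing /sys/bus/pci/devices/<bdf>/net — that is all the "Found net devices under ..." lines report. A minimal sketch of that lookup, using the PCI addresses from this run:

    # each supported PCI function carries one netdev; its name is the single
    # directory entry under .../net (cvl_0_0 and cvl_0_1 in this trace)
    for pci in 0000:84:00.0 0000:84:00.1; do
        echo "Found net devices under $pci: $(ls /sys/bus/pci/devices/$pci/net/)"
    done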
00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:26.970 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:26.971 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:26.971 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:26.971 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:26.971 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:26.971 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:26.971 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:26.971 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:26.971 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:26.971 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:26.971 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:26.971 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:26.971 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:26.971 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:26.971 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:26.971 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:26.971 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:27.230 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:27.230 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:27.230 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:27.230 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:27.230 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms 00:31:27.230 00:31:27.230 --- 10.0.0.2 ping statistics --- 00:31:27.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:27.230 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:31:27.230 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:27.230 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:27.230 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:31:27.230 00:31:27.230 --- 10.0.0.1 ping statistics --- 00:31:27.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:27.230 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:31:27.230 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:27.230 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:31:27.230 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:31:27.230 11:39:22 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:28.606 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:28.606 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:28.606 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:28.606 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:28.606 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:28.606 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:28.606 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:28.606 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:28.606 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:28.606 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:28.606 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:28.864 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:28.864 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:28.864 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:28.864 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:28.864 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:29.825 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:31:29.825 11:39:25 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:29.825 11:39:25 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:29.825 11:39:25 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:29.825 11:39:25 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:29.825 11:39:25 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:29.825 11:39:25 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:29.825 11:39:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:31:29.825 11:39:25 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:29.825 11:39:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:29.825 11:39:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:29.825 11:39:25 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2257039 00:31:29.825 11:39:25 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:29.825 11:39:25 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2257039 00:31:29.825 11:39:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 2257039 ']' 00:31:29.825 11:39:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:29.825 11:39:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:29.825 11:39:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
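[Note] nvmf_tcp_init above turns the two ports into a self-contained initiator/target pair by moving cvl_0_0 into a private network namespace, so target and initiator get distinct network stacks on one host. A condensed reconstruction of the commands visible in the trace (not the script itself):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1

    # target side lives in its own namespace with 10.0.0.2
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # initiator keeps 10.0.0.1 in the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # the two pings in the trace verify both directions before the target starts
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt app itself is then launched inside that namespace (the 'ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt' line above), which is why NVMF_APP gets prefixed with NVMF_TARGET_NS_CMD.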
00:31:29.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:29.825 11:39:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:29.825 11:39:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:29.825 [2024-07-26 11:39:25.443929] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:31:29.825 [2024-07-26 11:39:25.444025] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:30.100 EAL: No free 2048 kB hugepages reported on node 1 00:31:30.101 [2024-07-26 11:39:25.520211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:30.101 [2024-07-26 11:39:25.644349] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:30.101 [2024-07-26 11:39:25.644413] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:30.101 [2024-07-26 11:39:25.644435] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:30.101 [2024-07-26 11:39:25.644451] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:30.101 [2024-07-26 11:39:25.644463] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:30.101 [2024-07-26 11:39:25.644521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:30.101 [2024-07-26 11:39:25.644573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:30.101 [2024-07-26 11:39:25.644623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:30.101 [2024-07-26 11:39:25.644626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:30.359 11:39:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:30.359 11:39:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:31:30.359 11:39:25 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:30.359 11:39:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:30.359 11:39:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:30.359 11:39:25 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:30.359 11:39:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:31:30.359 11:39:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:31:30.359 11:39:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:31:30.359 11:39:25 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:31:30.359 11:39:25 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:31:30.359 11:39:25 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:82:00.0 ]] 00:31:30.359 11:39:25 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:31:30.359 11:39:25 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:31:30.359 11:39:25 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:82:00.0 ]] 00:31:30.359 11:39:25 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:31:30.359 11:39:25 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:31:30.359 11:39:25 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:31:30.359 11:39:25 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:31:30.359 11:39:25 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:82:00.0 00:31:30.359 11:39:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:31:30.359 11:39:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:82:00.0 00:31:30.359 11:39:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:31:30.359 11:39:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:30.359 11:39:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:30.359 11:39:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:30.359 ************************************ 00:31:30.359 START TEST spdk_target_abort 00:31:30.359 ************************************ 00:31:30.359 11:39:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:31:30.359 11:39:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:30.359 11:39:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:82:00.0 -b spdk_target 00:31:30.359 11:39:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.359 11:39:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:33.639 spdk_targetn1 00:31:33.639 11:39:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.639 11:39:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:33.639 11:39:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.639 11:39:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:33.639 [2024-07-26 11:39:28.700398] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:33.639 11:39:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.639 11:39:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:31:33.639 11:39:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.639 11:39:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:33.639 11:39:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.639 11:39:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:31:33.639 11:39:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.639 11:39:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:33.639 11:39:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.639 11:39:28 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:31:33.639 11:39:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.639 11:39:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:33.639 [2024-07-26 11:39:28.732747] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:33.639 11:39:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.639 11:39:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:31:33.639 11:39:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:33.639 11:39:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:33.639 11:39:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:33.639 11:39:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:33.639 11:39:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:33.639 11:39:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:33.639 11:39:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:33.639 11:39:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:33.639 11:39:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:33.639 11:39:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:33.639 11:39:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:33.639 11:39:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:33.639 11:39:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:33.639 11:39:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:33.639 11:39:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:33.639 11:39:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:33.639 11:39:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:33.639 11:39:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:33.639 11:39:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:33.639 11:39:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:33.639 EAL: No free 2048 kB hugepages 
reported on node 1 00:31:36.920 Initializing NVMe Controllers 00:31:36.920 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:36.920 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:36.920 Initialization complete. Launching workers. 00:31:36.920 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8998, failed: 0 00:31:36.920 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1394, failed to submit 7604 00:31:36.920 success 777, unsuccessful 617, failed 0 00:31:36.921 11:39:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:36.921 11:39:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:36.921 EAL: No free 2048 kB hugepages reported on node 1 00:31:40.199 Initializing NVMe Controllers 00:31:40.199 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:40.199 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:40.199 Initialization complete. Launching workers. 00:31:40.199 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8419, failed: 0 00:31:40.199 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1268, failed to submit 7151 00:31:40.199 success 312, unsuccessful 956, failed 0 00:31:40.199 11:39:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:40.199 11:39:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:40.199 EAL: No free 2048 kB hugepages reported on node 1 00:31:43.480 Initializing NVMe Controllers 00:31:43.480 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:43.480 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:43.480 Initialization complete. Launching workers. 
00:31:43.480 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29794, failed: 0 00:31:43.480 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2699, failed to submit 27095 00:31:43.480 success 501, unsuccessful 2198, failed 0 00:31:43.480 11:39:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:31:43.480 11:39:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.480 11:39:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:43.480 11:39:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.480 11:39:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:31:43.480 11:39:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.480 11:39:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:44.412 11:39:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.412 11:39:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2257039 00:31:44.412 11:39:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 2257039 ']' 00:31:44.412 11:39:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 2257039 00:31:44.412 11:39:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:31:44.670 11:39:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:44.670 11:39:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2257039 00:31:44.670 11:39:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:44.670 11:39:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:44.670 11:39:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2257039' 00:31:44.670 killing process with pid 2257039 00:31:44.670 11:39:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 2257039 00:31:44.670 11:39:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 2257039 00:31:44.929 00:31:44.929 real 0m14.542s 00:31:44.929 user 0m54.923s 00:31:44.929 sys 0m2.930s 00:31:44.929 11:39:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:44.929 11:39:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:44.929 ************************************ 00:31:44.929 END TEST spdk_target_abort 00:31:44.929 ************************************ 00:31:44.929 11:39:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:31:44.929 11:39:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:44.929 11:39:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:44.929 11:39:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:44.929 ************************************ 00:31:44.929 START TEST kernel_target_abort 00:31:44.929 
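[Note] Stripped of the rpc_cmd plumbing (rpc_cmd is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock), the spdk_target_abort phase that just finished is five RPCs and a loop over queue depths, reconstructed here from the trace with paths shortened:

    # attach the local PCIe NVMe device; it shows up as bdev "spdk_targetn1"
    scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:82:00.0 -b spdk_target

    # TCP transport plus an abort-test subsystem backed by that bdev
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

    # exercise abort handling at each queue depth (-q), 50% read/write, 4 KiB I/O
    for qd in 4 24 64; do
        build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done

In the per-run summaries, success/unsuccessful count abort commands rather than I/O; an "unsuccessful" abort is typically one whose target command completed before the abort landed, which is expected at these rates. Teardown then deletes the subsystem and detaches the controller before killing pid 2257039, and the kernel-target variant that starts next repeats the same abort loop against the in-kernel target.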
************************************ 00:31:44.929 11:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:31:44.929 11:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:31:44.929 11:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:31:44.929 11:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:44.929 11:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:44.929 11:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:44.929 11:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:44.929 11:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:44.929 11:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:44.929 11:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:44.929 11:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:44.929 11:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:44.929 11:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:44.929 11:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:44.929 11:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:44.929 11:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:44.929 11:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:44.929 11:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:44.929 11:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:31:44.929 11:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:31:44.929 11:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:44.929 11:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:44.929 11:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:46.304 Waiting for block devices as requested 00:31:46.561 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:31:46.561 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:46.819 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:46.819 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:46.819 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:46.819 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:47.077 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:47.077 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:47.077 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:47.077 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:47.334 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:47.334 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:47.334 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:47.592 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:47.592 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:47.592 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:47.592 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:47.851 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:47.851 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:47.851 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:47.851 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:31:47.851 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:47.851 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:47.851 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:47.851 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:47.851 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:47.851 No valid GPT data, bailing 00:31:47.851 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:47.851 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:31:47.851 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:31:47.851 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:47.851 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:47.851 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:47.851 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:47.851 11:39:43 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:47.851 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:47.851 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:31:47.851 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:47.851 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:31:47.851 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:47.851 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:31:47.851 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:31:47.851 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:31:47.851 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:47.851 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:31:48.108 00:31:48.108 Discovery Log Number of Records 2, Generation counter 2 00:31:48.108 =====Discovery Log Entry 0====== 00:31:48.108 trtype: tcp 00:31:48.108 adrfam: ipv4 00:31:48.108 subtype: current discovery subsystem 00:31:48.108 treq: not specified, sq flow control disable supported 00:31:48.108 portid: 1 00:31:48.108 trsvcid: 4420 00:31:48.108 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:48.108 traddr: 10.0.0.1 00:31:48.108 eflags: none 00:31:48.108 sectype: none 00:31:48.108 =====Discovery Log Entry 1====== 00:31:48.108 trtype: tcp 00:31:48.108 adrfam: ipv4 00:31:48.108 subtype: nvme subsystem 00:31:48.108 treq: not specified, sq flow control disable supported 00:31:48.108 portid: 1 00:31:48.108 trsvcid: 4420 00:31:48.108 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:48.108 traddr: 10.0.0.1 00:31:48.108 eflags: none 00:31:48.108 sectype: none 00:31:48.108 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:31:48.108 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:48.108 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:48.108 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:31:48.108 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:48.108 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:48.108 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:48.108 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:48.108 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:48.108 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:48.108 11:39:43 
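[Note] For kernel_target_abort the roles flip: the in-kernel nvmet driver serves /dev/nvme0n1 at 10.0.0.1:4420 and SPDK's abort example acts as initiator. configure_kernel_target drives everything through configfs; xtrace does not print redirections, so the echo targets below are filled in from the standard nvmet configfs layout rather than from the trace itself:

    modprobe nvmet
    nvmet=/sys/kernel/config/nvmet
    sub=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=$nvmet/ports/1

    mkdir "$sub" "$sub/namespaces/1" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$sub/attr_serial"
    echo 1 > "$sub/attr_allow_any_host"
    echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
    echo 1 > "$sub/namespaces/1/enable"

    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp      > "$port/addr_trtype"
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"

    # exporting the subsystem through the port is what makes it show up as
    # Discovery Log Entry 1 in the 'nvme discover' output above
    ln -s "$sub" "$port/subsystems/"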
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:48.108 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:48.108 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:48.108 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:48.108 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:31:48.108 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:48.108 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:31:48.108 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:48.108 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:48.108 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:48.108 11:39:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:48.108 EAL: No free 2048 kB hugepages reported on node 1 00:31:51.419 Initializing NVMe Controllers 00:31:51.419 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:51.419 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:51.419 Initialization complete. Launching workers. 00:31:51.419 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33737, failed: 0 00:31:51.419 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33737, failed to submit 0 00:31:51.419 success 0, unsuccessful 33737, failed 0 00:31:51.419 11:39:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:51.419 11:39:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:51.419 EAL: No free 2048 kB hugepages reported on node 1 00:31:54.697 Initializing NVMe Controllers 00:31:54.697 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:54.697 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:54.697 Initialization complete. Launching workers. 
00:31:54.697 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 65792, failed: 0 00:31:54.697 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16602, failed to submit 49190 00:31:54.697 success 0, unsuccessful 16602, failed 0 00:31:54.697 11:39:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:54.697 11:39:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:54.697 EAL: No free 2048 kB hugepages reported on node 1 00:31:57.980 Initializing NVMe Controllers 00:31:57.980 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:57.980 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:57.980 Initialization complete. Launching workers. 00:31:57.980 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 64262, failed: 0 00:31:57.980 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16050, failed to submit 48212 00:31:57.980 success 0, unsuccessful 16050, failed 0 00:31:57.980 11:39:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:31:57.980 11:39:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:57.980 11:39:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:31:57.980 11:39:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:57.980 11:39:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:57.980 11:39:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:57.980 11:39:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:57.980 11:39:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:57.980 11:39:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:31:57.980 11:39:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:58.917 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:58.917 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:58.917 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:58.917 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:58.917 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:58.917 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:58.917 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:58.917 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:58.917 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:58.917 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:58.917 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:58.917 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:58.917 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:59.175 0000:80:04.2 (8086 0e22): 
ioatdma -> vfio-pci 00:31:59.175 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:59.175 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:00.112 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:32:00.112 00:32:00.112 real 0m15.152s 00:32:00.112 user 0m5.680s 00:32:00.112 sys 0m3.966s 00:32:00.112 11:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:00.112 11:39:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:00.112 ************************************ 00:32:00.112 END TEST kernel_target_abort 00:32:00.112 ************************************ 00:32:00.112 11:39:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:00.112 11:39:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:32:00.112 11:39:55 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:00.112 11:39:55 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:32:00.112 11:39:55 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:00.112 11:39:55 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:32:00.112 11:39:55 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:00.112 11:39:55 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:00.112 rmmod nvme_tcp 00:32:00.112 rmmod nvme_fabrics 00:32:00.112 rmmod nvme_keyring 00:32:00.112 11:39:55 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:00.112 11:39:55 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:32:00.112 11:39:55 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:32:00.112 11:39:55 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2257039 ']' 00:32:00.112 11:39:55 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2257039 00:32:00.112 11:39:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 2257039 ']' 00:32:00.112 11:39:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 2257039 00:32:00.112 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2257039) - No such process 00:32:00.112 11:39:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 2257039 is not found' 00:32:00.112 Process with pid 2257039 is not found 00:32:00.112 11:39:55 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:32:00.112 11:39:55 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:01.488 Waiting for block devices as requested 00:32:01.488 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:32:01.745 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:01.745 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:02.003 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:02.003 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:02.003 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:02.261 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:02.261 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:02.261 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:02.261 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:02.520 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:02.520 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:02.520 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:02.520 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:02.779 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:02.779 
0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:02.779 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:03.039 11:39:58 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:03.039 11:39:58 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:03.039 11:39:58 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:03.039 11:39:58 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:03.039 11:39:58 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:03.039 11:39:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:03.039 11:39:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:04.943 11:40:00 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:04.943 00:32:04.943 real 0m40.742s 00:32:04.943 user 1m3.140s 00:32:04.943 sys 0m11.448s 00:32:04.943 11:40:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:04.943 11:40:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:04.943 ************************************ 00:32:04.943 END TEST nvmf_abort_qd_sizes 00:32:04.943 ************************************ 00:32:04.943 11:40:00 -- spdk/autotest.sh@299 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:04.943 11:40:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:04.943 11:40:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:04.943 11:40:00 -- common/autotest_common.sh@10 -- # set +x 00:32:05.203 ************************************ 00:32:05.203 START TEST keyring_file 00:32:05.203 ************************************ 00:32:05.203 11:40:00 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:05.203 * Looking for test storage... 
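[Note] Teardown above runs in reverse: clean_kernel_target disables the namespace and unwinds the configfs tree before unloading nvmet (the port symlink has to go before the subsystem can be removed), then nvmftestfini removes the initiator modules (the rmmod lines) and flushes the test address. As a condensed sketch, with redirect targets again taken from the nvmet layout:

    nvmet=/sys/kernel/config/nvmet
    subnqn=nqn.2016-06.io.spdk:testnqn

    echo 0 > $nvmet/subsystems/$subnqn/namespaces/1/enable
    rm -f  $nvmet/ports/1/subsystems/$subnqn
    rmdir  $nvmet/subsystems/$subnqn/namespaces/1
    rmdir  $nvmet/ports/1
    rmdir  $nvmet/subsystems/$subnqn
    modprobe -r nvmet_tcp nvmet

    # initiator side: nvme-tcp drags nvme_fabrics and nvme_keyring out with it
    modprobe -v -r nvme-tcp
    ip -4 addr flush cvl_0_1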
00:32:05.203 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:05.203 11:40:00 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:05.203 11:40:00 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:05.203 11:40:00 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:32:05.203 11:40:00 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:05.203 11:40:00 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:05.203 11:40:00 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:05.203 11:40:00 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:05.203 11:40:00 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:05.203 11:40:00 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:05.203 11:40:00 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:05.203 11:40:00 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:05.203 11:40:00 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:05.203 11:40:00 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:05.203 11:40:00 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:32:05.203 11:40:00 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:32:05.203 11:40:00 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:05.203 11:40:00 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:05.203 11:40:00 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:05.203 11:40:00 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:05.203 11:40:00 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:05.203 11:40:00 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:05.203 11:40:00 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:05.203 11:40:00 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:05.203 11:40:00 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.203 11:40:00 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.203 11:40:00 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.203 11:40:00 keyring_file -- paths/export.sh@5 -- # export PATH 00:32:05.203 11:40:00 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.203 11:40:00 keyring_file -- nvmf/common.sh@47 -- # : 0 00:32:05.203 11:40:00 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:05.203 11:40:00 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:05.203 11:40:00 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:05.203 11:40:00 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:05.203 11:40:00 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:05.203 11:40:00 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:05.203 11:40:00 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:05.203 11:40:00 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:05.203 11:40:00 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:05.203 11:40:00 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:05.203 11:40:00 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:05.203 11:40:00 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:32:05.203 11:40:00 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:32:05.203 11:40:00 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:32:05.203 11:40:00 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:05.203 11:40:00 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:05.203 11:40:00 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:05.203 11:40:00 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:05.203 11:40:00 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:05.203 11:40:00 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:05.203 11:40:00 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.65MnHXLhXM 00:32:05.203 11:40:00 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:05.203 11:40:00 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:05.203 11:40:00 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:05.203 11:40:00 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:05.203 11:40:00 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:05.203 11:40:00 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:05.203 11:40:00 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:05.203 11:40:00 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.65MnHXLhXM 00:32:05.203 11:40:00 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.65MnHXLhXM 00:32:05.203 11:40:00 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.65MnHXLhXM 00:32:05.203 11:40:00 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:32:05.203 11:40:00 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:05.203 11:40:00 keyring_file -- keyring/common.sh@17 -- # name=key1 00:32:05.203 11:40:00 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:05.203 11:40:00 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:05.203 11:40:00 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:05.203 11:40:00 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.M1DLHAYnqe 00:32:05.203 11:40:00 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:05.203 11:40:00 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:05.203 11:40:00 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:05.203 11:40:00 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:05.203 11:40:00 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:05.203 11:40:00 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:05.203 11:40:00 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:05.203 11:40:00 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.M1DLHAYnqe 00:32:05.203 11:40:00 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.M1DLHAYnqe 00:32:05.203 11:40:00 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.M1DLHAYnqe 00:32:05.203 11:40:00 keyring_file -- keyring/file.sh@30 -- # tgtpid=2262813 00:32:05.203 11:40:00 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:05.203 11:40:00 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2262813 00:32:05.203 11:40:00 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2262813 ']' 00:32:05.203 11:40:00 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:05.203 11:40:00 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:05.203 11:40:00 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:05.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:05.203 11:40:00 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:05.203 11:40:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:05.462 [2024-07-26 11:40:00.915904] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
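
The prep_key sequence traced above is how each test key is materialized: mktemp creates the file, format_interchange_psk (via format_key and the inline "python -" at nvmf/common.sh@705) wraps the raw hex key in the NVMe/TCP TLS PSK interchange format, and chmod 0600 locks the file down before it is registered. A minimal sketch under these assumptions: the hex string is decoded to 16 raw bytes, and the final field is base64 of the key bytes plus a little-endian CRC32, giving "NVMeTLSkey-1:<digest hex>:<base64>:"; the in-tree helper may differ in detail.

prep_key_sketch() {
  local key_hex=$1 digest=$2 path
  path=$(mktemp)
  python3 - "$key_hex" "$digest" > "$path" << 'PY'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])             # 32 hex chars -> 16-byte PSK (assumption)
crc = zlib.crc32(key).to_bytes(4, "little")  # CRC32 appended; endianness assumed
print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PY
  chmod 0600 "$path"  # keyring_file_add_key rejects looser modes, as the 0660 check later shows
  echo "$path"
}

key0path=$(prep_key_sketch 00112233445566778899aabbccddeeff 0)
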
00:32:05.462 [2024-07-26 11:40:00.916009] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2262813 ] 00:32:05.462 EAL: No free 2048 kB hugepages reported on node 1 00:32:05.462 [2024-07-26 11:40:00.998370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:05.462 [2024-07-26 11:40:01.122237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:06.028 11:40:01 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:06.028 11:40:01 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:32:06.028 11:40:01 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:32:06.028 11:40:01 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.028 11:40:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:06.028 [2024-07-26 11:40:01.408934] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:06.028 null0 00:32:06.028 [2024-07-26 11:40:01.440995] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:06.028 [2024-07-26 11:40:01.441521] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:06.028 [2024-07-26 11:40:01.448993] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:32:06.028 11:40:01 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.028 11:40:01 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:06.028 11:40:01 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:32:06.028 11:40:01 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:06.028 11:40:01 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:06.028 11:40:01 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:06.028 11:40:01 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:06.028 11:40:01 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:06.028 11:40:01 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:06.028 11:40:01 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.028 11:40:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:06.028 [2024-07-26 11:40:01.457009] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:32:06.028 request: 00:32:06.028 { 00:32:06.028 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:32:06.028 "secure_channel": false, 00:32:06.028 "listen_address": { 00:32:06.028 "trtype": "tcp", 00:32:06.028 "traddr": "127.0.0.1", 00:32:06.028 "trsvcid": "4420" 00:32:06.028 }, 00:32:06.028 "method": "nvmf_subsystem_add_listener", 00:32:06.028 "req_id": 1 00:32:06.028 } 00:32:06.028 Got JSON-RPC error response 00:32:06.028 response: 00:32:06.028 { 00:32:06.028 "code": -32602, 00:32:06.028 "message": "Invalid parameters" 00:32:06.028 } 00:32:06.028 11:40:01 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:06.028 11:40:01 keyring_file -- common/autotest_common.sh@653 -- # es=1 
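
The listener registration above is a deliberate negative test: the target already listens on 127.0.0.1:4420, so a second nvmf_subsystem_add_listener must fail with code -32602. The NOT wrapper from autotest_common.sh turns that expected failure into a pass by inverting the exit status. A stripped-down sketch; the real helper also validates its argument with "type -t" and special-cases exit codes above 128 (signal deaths), which is what the "(( es > 128 ))" line in the trace is doing.

NOT() {
  local es=0
  "$@" || es=$?
  # Return success only if the wrapped command failed.
  (( es != 0 ))
}

# As at keyring/file.sh@43: re-adding an existing listener has to be rejected.
NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
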
00:32:06.028 11:40:01 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:06.028 11:40:01 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:06.028 11:40:01 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:06.028 11:40:01 keyring_file -- keyring/file.sh@46 -- # bperfpid=2262921 00:32:06.028 11:40:01 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:32:06.028 11:40:01 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2262921 /var/tmp/bperf.sock 00:32:06.028 11:40:01 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2262921 ']' 00:32:06.028 11:40:01 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:06.028 11:40:01 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:06.028 11:40:01 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:06.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:06.028 11:40:01 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:06.028 11:40:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:06.028 [2024-07-26 11:40:01.508549] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 00:32:06.028 [2024-07-26 11:40:01.508633] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2262921 ] 00:32:06.028 EAL: No free 2048 kB hugepages reported on node 1 00:32:06.028 [2024-07-26 11:40:01.574170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:06.287 [2024-07-26 11:40:01.696216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:06.287 11:40:01 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:06.287 11:40:01 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:32:06.287 11:40:01 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.65MnHXLhXM 00:32:06.287 11:40:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.65MnHXLhXM 00:32:06.545 11:40:02 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.M1DLHAYnqe 00:32:06.545 11:40:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.M1DLHAYnqe 00:32:07.110 11:40:02 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:32:07.110 11:40:02 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:32:07.110 11:40:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:07.110 11:40:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:07.110 11:40:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:07.110 11:40:02 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.65MnHXLhXM == \/\t\m\p\/\t\m\p\.\6\5\M\n\H\X\L\h\X\M ]] 00:32:07.110 11:40:02 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:32:07.110 11:40:02 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:32:07.110 11:40:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:07.110 11:40:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:07.110 11:40:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:07.707 11:40:03 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.M1DLHAYnqe == \/\t\m\p\/\t\m\p\.\M\1\D\L\H\A\Y\n\q\e ]] 00:32:07.707 11:40:03 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:32:07.707 11:40:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:07.707 11:40:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:07.707 11:40:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:07.707 11:40:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:07.707 11:40:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:07.964 11:40:03 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:32:07.964 11:40:03 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:32:07.964 11:40:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:07.964 11:40:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:07.964 11:40:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:07.964 11:40:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:07.964 11:40:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:08.528 11:40:03 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:32:08.528 11:40:03 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:08.528 11:40:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:08.786 [2024-07-26 11:40:04.288601] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:08.786 nvme0n1 00:32:08.786 11:40:04 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:32:08.786 11:40:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:08.786 11:40:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:08.786 11:40:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:08.786 11:40:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:08.786 11:40:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:09.043 11:40:04 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:32:09.043 11:40:04 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:32:09.043 11:40:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:09.043 11:40:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:09.043 11:40:04 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:09.043 11:40:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:09.043 11:40:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:09.609 11:40:05 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:32:09.609 11:40:05 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:09.609 Running I/O for 1 seconds... 00:32:10.541 00:32:10.541 Latency(us) 00:32:10.541 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:10.541 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:32:10.541 nvme0n1 : 1.02 4766.03 18.62 0.00 0.00 26542.35 8107.05 30874.74 00:32:10.541 =================================================================================================================== 00:32:10.541 Total : 4766.03 18.62 0.00 0.00 26542.35 8107.05 30874.74 00:32:10.541 0 00:32:10.541 11:40:06 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:10.541 11:40:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:11.107 11:40:06 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:32:11.107 11:40:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:11.107 11:40:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:11.107 11:40:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:11.107 11:40:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:11.107 11:40:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:11.673 11:40:07 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:32:11.673 11:40:07 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:32:11.673 11:40:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:11.673 11:40:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:11.673 11:40:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:11.673 11:40:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:11.673 11:40:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:12.238 11:40:07 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:32:12.238 11:40:07 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:12.238 11:40:07 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:32:12.238 11:40:07 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:12.238 11:40:07 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:32:12.238 11:40:07 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:12.238 11:40:07 keyring_file -- 
common/autotest_common.sh@642 -- # type -t bperf_cmd 00:32:12.238 11:40:07 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:12.238 11:40:07 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:12.238 11:40:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:12.497 [2024-07-26 11:40:08.063928] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:12.497 [2024-07-26 11:40:08.064382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf497a0 (107): Transport endpoint is not connected 00:32:12.497 [2024-07-26 11:40:08.065373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf497a0 (9): Bad file descriptor 00:32:12.497 [2024-07-26 11:40:08.066372] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:12.497 [2024-07-26 11:40:08.066400] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:12.497 [2024-07-26 11:40:08.066416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:12.497 request: 00:32:12.497 { 00:32:12.497 "name": "nvme0", 00:32:12.497 "trtype": "tcp", 00:32:12.497 "traddr": "127.0.0.1", 00:32:12.497 "adrfam": "ipv4", 00:32:12.497 "trsvcid": "4420", 00:32:12.497 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:12.497 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:12.497 "prchk_reftag": false, 00:32:12.497 "prchk_guard": false, 00:32:12.497 "hdgst": false, 00:32:12.497 "ddgst": false, 00:32:12.497 "psk": "key1", 00:32:12.497 "method": "bdev_nvme_attach_controller", 00:32:12.497 "req_id": 1 00:32:12.497 } 00:32:12.497 Got JSON-RPC error response 00:32:12.497 response: 00:32:12.497 { 00:32:12.497 "code": -5, 00:32:12.497 "message": "Input/output error" 00:32:12.497 } 00:32:12.497 11:40:08 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:32:12.497 11:40:08 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:12.497 11:40:08 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:12.497 11:40:08 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:12.497 11:40:08 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:32:12.497 11:40:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:12.497 11:40:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:12.497 11:40:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:12.497 11:40:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:12.497 11:40:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:13.062 11:40:08 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:32:13.063 11:40:08 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:32:13.063 11:40:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:13.063 11:40:08 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:13.063 11:40:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:13.063 11:40:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:13.063 11:40:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:13.063 11:40:08 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:32:13.063 11:40:08 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:32:13.063 11:40:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:13.628 11:40:09 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:32:13.628 11:40:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:32:13.885 11:40:09 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:32:13.885 11:40:09 keyring_file -- keyring/file.sh@77 -- # jq length 00:32:13.885 11:40:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:14.143 11:40:09 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:32:14.143 11:40:09 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.65MnHXLhXM 00:32:14.143 11:40:09 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.65MnHXLhXM 00:32:14.143 11:40:09 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:32:14.143 11:40:09 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.65MnHXLhXM 00:32:14.143 11:40:09 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:32:14.143 11:40:09 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:14.143 11:40:09 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:32:14.143 11:40:09 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:14.143 11:40:09 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.65MnHXLhXM 00:32:14.143 11:40:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.65MnHXLhXM 00:32:14.401 [2024-07-26 11:40:09.999271] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.65MnHXLhXM': 0100660 00:32:14.401 [2024-07-26 11:40:09.999311] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:14.401 request: 00:32:14.401 { 00:32:14.401 "name": "key0", 00:32:14.401 "path": "/tmp/tmp.65MnHXLhXM", 00:32:14.401 "method": "keyring_file_add_key", 00:32:14.401 "req_id": 1 00:32:14.401 } 00:32:14.401 Got JSON-RPC error response 00:32:14.401 response: 00:32:14.401 { 00:32:14.401 "code": -1, 00:32:14.401 "message": "Operation not permitted" 00:32:14.401 } 00:32:14.401 11:40:10 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:32:14.402 11:40:10 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:14.402 11:40:10 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:14.402 11:40:10 keyring_file -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:14.402 11:40:10 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.65MnHXLhXM 00:32:14.402 11:40:10 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.65MnHXLhXM 00:32:14.402 11:40:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.65MnHXLhXM 00:32:14.967 11:40:10 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.65MnHXLhXM 00:32:14.967 11:40:10 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:32:14.967 11:40:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:14.967 11:40:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:14.967 11:40:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:14.967 11:40:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:14.967 11:40:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:15.225 11:40:10 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:32:15.225 11:40:10 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:15.225 11:40:10 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:32:15.225 11:40:10 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:15.225 11:40:10 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:32:15.225 11:40:10 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:15.225 11:40:10 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:32:15.225 11:40:10 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:15.225 11:40:10 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:15.225 11:40:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:15.484 [2024-07-26 11:40:11.070138] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.65MnHXLhXM': No such file or directory 00:32:15.484 [2024-07-26 11:40:11.070184] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:32:15.484 [2024-07-26 11:40:11.070216] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:32:15.484 [2024-07-26 11:40:11.070228] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:15.484 [2024-07-26 11:40:11.070242] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:32:15.484 request: 00:32:15.484 { 00:32:15.484 "name": "nvme0", 00:32:15.484 "trtype": "tcp", 00:32:15.484 "traddr": "127.0.0.1", 00:32:15.484 "adrfam": "ipv4", 00:32:15.484 
"trsvcid": "4420", 00:32:15.484 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:15.484 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:15.484 "prchk_reftag": false, 00:32:15.484 "prchk_guard": false, 00:32:15.484 "hdgst": false, 00:32:15.484 "ddgst": false, 00:32:15.484 "psk": "key0", 00:32:15.484 "method": "bdev_nvme_attach_controller", 00:32:15.484 "req_id": 1 00:32:15.484 } 00:32:15.484 Got JSON-RPC error response 00:32:15.484 response: 00:32:15.484 { 00:32:15.484 "code": -19, 00:32:15.484 "message": "No such device" 00:32:15.484 } 00:32:15.484 11:40:11 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:32:15.484 11:40:11 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:15.484 11:40:11 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:15.484 11:40:11 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:15.484 11:40:11 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:32:15.484 11:40:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:16.050 11:40:11 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:16.050 11:40:11 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:16.050 11:40:11 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:16.050 11:40:11 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:16.050 11:40:11 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:16.050 11:40:11 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:16.050 11:40:11 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.WhuYkea1PG 00:32:16.050 11:40:11 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:16.050 11:40:11 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:16.050 11:40:11 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:16.050 11:40:11 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:16.050 11:40:11 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:16.050 11:40:11 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:16.050 11:40:11 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:16.050 11:40:11 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.WhuYkea1PG 00:32:16.050 11:40:11 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.WhuYkea1PG 00:32:16.050 11:40:11 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.WhuYkea1PG 00:32:16.050 11:40:11 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.WhuYkea1PG 00:32:16.050 11:40:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.WhuYkea1PG 00:32:16.308 11:40:11 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:16.308 11:40:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:16.873 nvme0n1 00:32:16.873 
11:40:12 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:32:16.873 11:40:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:16.873 11:40:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:16.873 11:40:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:16.873 11:40:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:16.873 11:40:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:17.131 11:40:12 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:32:17.131 11:40:12 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:32:17.131 11:40:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:17.697 11:40:13 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:32:17.697 11:40:13 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:32:17.697 11:40:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:17.697 11:40:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:17.697 11:40:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:17.954 11:40:13 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:32:17.954 11:40:13 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:32:17.954 11:40:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:17.954 11:40:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:17.954 11:40:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:17.954 11:40:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:17.954 11:40:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:18.519 11:40:13 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:32:18.519 11:40:13 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:18.519 11:40:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:18.777 11:40:14 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:32:18.777 11:40:14 keyring_file -- keyring/file.sh@104 -- # jq length 00:32:18.777 11:40:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:19.034 11:40:14 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:32:19.034 11:40:14 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.WhuYkea1PG 00:32:19.034 11:40:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.WhuYkea1PG 00:32:19.601 11:40:15 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.M1DLHAYnqe 00:32:19.601 11:40:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.M1DLHAYnqe 00:32:19.859 11:40:15 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:19.859 11:40:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:20.425 nvme0n1 00:32:20.425 11:40:15 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:32:20.425 11:40:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:32:20.683 11:40:16 keyring_file -- keyring/file.sh@112 -- # config='{ 00:32:20.683 "subsystems": [ 00:32:20.683 { 00:32:20.683 "subsystem": "keyring", 00:32:20.683 "config": [ 00:32:20.683 { 00:32:20.683 "method": "keyring_file_add_key", 00:32:20.683 "params": { 00:32:20.683 "name": "key0", 00:32:20.683 "path": "/tmp/tmp.WhuYkea1PG" 00:32:20.683 } 00:32:20.683 }, 00:32:20.683 { 00:32:20.683 "method": "keyring_file_add_key", 00:32:20.683 "params": { 00:32:20.683 "name": "key1", 00:32:20.683 "path": "/tmp/tmp.M1DLHAYnqe" 00:32:20.683 } 00:32:20.683 } 00:32:20.683 ] 00:32:20.683 }, 00:32:20.683 { 00:32:20.683 "subsystem": "iobuf", 00:32:20.683 "config": [ 00:32:20.683 { 00:32:20.683 "method": "iobuf_set_options", 00:32:20.683 "params": { 00:32:20.683 "small_pool_count": 8192, 00:32:20.683 "large_pool_count": 1024, 00:32:20.683 "small_bufsize": 8192, 00:32:20.683 "large_bufsize": 135168 00:32:20.683 } 00:32:20.683 } 00:32:20.683 ] 00:32:20.683 }, 00:32:20.683 { 00:32:20.683 "subsystem": "sock", 00:32:20.683 "config": [ 00:32:20.683 { 00:32:20.683 "method": "sock_set_default_impl", 00:32:20.683 "params": { 00:32:20.683 "impl_name": "posix" 00:32:20.683 } 00:32:20.684 }, 00:32:20.684 { 00:32:20.684 "method": "sock_impl_set_options", 00:32:20.684 "params": { 00:32:20.684 "impl_name": "ssl", 00:32:20.684 "recv_buf_size": 4096, 00:32:20.684 "send_buf_size": 4096, 00:32:20.684 "enable_recv_pipe": true, 00:32:20.684 "enable_quickack": false, 00:32:20.684 "enable_placement_id": 0, 00:32:20.684 "enable_zerocopy_send_server": true, 00:32:20.684 "enable_zerocopy_send_client": false, 00:32:20.684 "zerocopy_threshold": 0, 00:32:20.684 "tls_version": 0, 00:32:20.684 "enable_ktls": false 00:32:20.684 } 00:32:20.684 }, 00:32:20.684 { 00:32:20.684 "method": "sock_impl_set_options", 00:32:20.684 "params": { 00:32:20.684 "impl_name": "posix", 00:32:20.684 "recv_buf_size": 2097152, 00:32:20.684 "send_buf_size": 2097152, 00:32:20.684 "enable_recv_pipe": true, 00:32:20.684 "enable_quickack": false, 00:32:20.684 "enable_placement_id": 0, 00:32:20.684 "enable_zerocopy_send_server": true, 00:32:20.684 "enable_zerocopy_send_client": false, 00:32:20.684 "zerocopy_threshold": 0, 00:32:20.684 "tls_version": 0, 00:32:20.684 "enable_ktls": false 00:32:20.684 } 00:32:20.684 } 00:32:20.684 ] 00:32:20.684 }, 00:32:20.684 { 00:32:20.684 "subsystem": "vmd", 00:32:20.684 "config": [] 00:32:20.684 }, 00:32:20.684 { 00:32:20.684 "subsystem": "accel", 00:32:20.684 "config": [ 00:32:20.684 { 00:32:20.684 "method": "accel_set_options", 00:32:20.684 "params": { 00:32:20.684 "small_cache_size": 128, 00:32:20.684 "large_cache_size": 16, 00:32:20.684 "task_count": 2048, 00:32:20.684 "sequence_count": 2048, 00:32:20.684 "buf_count": 2048 00:32:20.684 } 00:32:20.684 } 00:32:20.684 ] 00:32:20.684 
}, 00:32:20.684 { 00:32:20.684 "subsystem": "bdev", 00:32:20.684 "config": [ 00:32:20.684 { 00:32:20.684 "method": "bdev_set_options", 00:32:20.684 "params": { 00:32:20.684 "bdev_io_pool_size": 65535, 00:32:20.684 "bdev_io_cache_size": 256, 00:32:20.684 "bdev_auto_examine": true, 00:32:20.684 "iobuf_small_cache_size": 128, 00:32:20.684 "iobuf_large_cache_size": 16 00:32:20.684 } 00:32:20.684 }, 00:32:20.684 { 00:32:20.684 "method": "bdev_raid_set_options", 00:32:20.684 "params": { 00:32:20.684 "process_window_size_kb": 1024, 00:32:20.684 "process_max_bandwidth_mb_sec": 0 00:32:20.684 } 00:32:20.684 }, 00:32:20.684 { 00:32:20.684 "method": "bdev_iscsi_set_options", 00:32:20.684 "params": { 00:32:20.684 "timeout_sec": 30 00:32:20.684 } 00:32:20.684 }, 00:32:20.684 { 00:32:20.684 "method": "bdev_nvme_set_options", 00:32:20.684 "params": { 00:32:20.684 "action_on_timeout": "none", 00:32:20.684 "timeout_us": 0, 00:32:20.684 "timeout_admin_us": 0, 00:32:20.684 "keep_alive_timeout_ms": 10000, 00:32:20.684 "arbitration_burst": 0, 00:32:20.684 "low_priority_weight": 0, 00:32:20.684 "medium_priority_weight": 0, 00:32:20.684 "high_priority_weight": 0, 00:32:20.684 "nvme_adminq_poll_period_us": 10000, 00:32:20.684 "nvme_ioq_poll_period_us": 0, 00:32:20.684 "io_queue_requests": 512, 00:32:20.684 "delay_cmd_submit": true, 00:32:20.684 "transport_retry_count": 4, 00:32:20.684 "bdev_retry_count": 3, 00:32:20.684 "transport_ack_timeout": 0, 00:32:20.684 "ctrlr_loss_timeout_sec": 0, 00:32:20.684 "reconnect_delay_sec": 0, 00:32:20.684 "fast_io_fail_timeout_sec": 0, 00:32:20.684 "disable_auto_failback": false, 00:32:20.684 "generate_uuids": false, 00:32:20.684 "transport_tos": 0, 00:32:20.684 "nvme_error_stat": false, 00:32:20.684 "rdma_srq_size": 0, 00:32:20.684 "io_path_stat": false, 00:32:20.684 "allow_accel_sequence": false, 00:32:20.684 "rdma_max_cq_size": 0, 00:32:20.684 "rdma_cm_event_timeout_ms": 0, 00:32:20.684 "dhchap_digests": [ 00:32:20.684 "sha256", 00:32:20.684 "sha384", 00:32:20.684 "sha512" 00:32:20.684 ], 00:32:20.684 "dhchap_dhgroups": [ 00:32:20.684 "null", 00:32:20.684 "ffdhe2048", 00:32:20.684 "ffdhe3072", 00:32:20.684 "ffdhe4096", 00:32:20.684 "ffdhe6144", 00:32:20.684 "ffdhe8192" 00:32:20.684 ] 00:32:20.684 } 00:32:20.684 }, 00:32:20.684 { 00:32:20.684 "method": "bdev_nvme_attach_controller", 00:32:20.684 "params": { 00:32:20.684 "name": "nvme0", 00:32:20.684 "trtype": "TCP", 00:32:20.684 "adrfam": "IPv4", 00:32:20.684 "traddr": "127.0.0.1", 00:32:20.684 "trsvcid": "4420", 00:32:20.684 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:20.684 "prchk_reftag": false, 00:32:20.684 "prchk_guard": false, 00:32:20.684 "ctrlr_loss_timeout_sec": 0, 00:32:20.684 "reconnect_delay_sec": 0, 00:32:20.684 "fast_io_fail_timeout_sec": 0, 00:32:20.684 "psk": "key0", 00:32:20.684 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:20.684 "hdgst": false, 00:32:20.685 "ddgst": false 00:32:20.685 } 00:32:20.685 }, 00:32:20.685 { 00:32:20.685 "method": "bdev_nvme_set_hotplug", 00:32:20.685 "params": { 00:32:20.685 "period_us": 100000, 00:32:20.685 "enable": false 00:32:20.685 } 00:32:20.685 }, 00:32:20.685 { 00:32:20.685 "method": "bdev_wait_for_examine" 00:32:20.685 } 00:32:20.685 ] 00:32:20.685 }, 00:32:20.685 { 00:32:20.685 "subsystem": "nbd", 00:32:20.685 "config": [] 00:32:20.685 } 00:32:20.685 ] 00:32:20.685 }' 00:32:20.685 11:40:16 keyring_file -- keyring/file.sh@114 -- # killprocess 2262921 00:32:20.685 11:40:16 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2262921 ']' 00:32:20.685 11:40:16 
keyring_file -- common/autotest_common.sh@954 -- # kill -0 2262921 00:32:20.685 11:40:16 keyring_file -- common/autotest_common.sh@955 -- # uname 00:32:20.685 11:40:16 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:20.685 11:40:16 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2262921 00:32:20.685 11:40:16 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:20.685 11:40:16 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:20.685 11:40:16 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2262921' 00:32:20.685 killing process with pid 2262921 00:32:20.685 11:40:16 keyring_file -- common/autotest_common.sh@969 -- # kill 2262921 00:32:20.685 Received shutdown signal, test time was about 1.000000 seconds 00:32:20.685 00:32:20.685 Latency(us) 00:32:20.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:20.685 =================================================================================================================== 00:32:20.685 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:20.685 11:40:16 keyring_file -- common/autotest_common.sh@974 -- # wait 2262921 00:32:20.945 11:40:16 keyring_file -- keyring/file.sh@117 -- # bperfpid=2264720 00:32:20.945 11:40:16 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2264720 /var/tmp/bperf.sock 00:32:20.945 11:40:16 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2264720 ']' 00:32:20.945 11:40:16 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:20.945 11:40:16 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:32:20.945 11:40:16 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:20.945 11:40:16 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:20.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
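
Note the "-c /dev/fd/63" in the bdevperf invocation above: the large JSON blob echoed next is the configuration captured from the first bperf session with save_config, replayed into the fresh process through bash process substitution (which is what shows up as /dev/fd/63). Schematically, with paths abbreviated relative to the spdk checkout used in the trace:

# Capture the live config: both file keys plus the nvme0 controller pinned to psk key0.
config=$(scripts/rpc.py -s /var/tmp/bperf.sock save_config)
# Hand it to a new bdevperf as its startup config via process substitution.
build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
  -r /var/tmp/bperf.sock -z -c <(echo "$config")
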
00:32:20.945 11:40:16 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:32:20.945 "subsystems": [ 00:32:20.945 { 00:32:20.945 "subsystem": "keyring", 00:32:20.945 "config": [ 00:32:20.945 { 00:32:20.945 "method": "keyring_file_add_key", 00:32:20.945 "params": { 00:32:20.945 "name": "key0", 00:32:20.945 "path": "/tmp/tmp.WhuYkea1PG" 00:32:20.945 } 00:32:20.945 }, 00:32:20.945 { 00:32:20.945 "method": "keyring_file_add_key", 00:32:20.945 "params": { 00:32:20.945 "name": "key1", 00:32:20.945 "path": "/tmp/tmp.M1DLHAYnqe" 00:32:20.945 } 00:32:20.945 } 00:32:20.945 ] 00:32:20.945 }, 00:32:20.945 { 00:32:20.945 "subsystem": "iobuf", 00:32:20.945 "config": [ 00:32:20.945 { 00:32:20.945 "method": "iobuf_set_options", 00:32:20.945 "params": { 00:32:20.945 "small_pool_count": 8192, 00:32:20.945 "large_pool_count": 1024, 00:32:20.945 "small_bufsize": 8192, 00:32:20.945 "large_bufsize": 135168 00:32:20.945 } 00:32:20.945 } 00:32:20.945 ] 00:32:20.945 }, 00:32:20.945 { 00:32:20.945 "subsystem": "sock", 00:32:20.945 "config": [ 00:32:20.945 { 00:32:20.945 "method": "sock_set_default_impl", 00:32:20.945 "params": { 00:32:20.945 "impl_name": "posix" 00:32:20.945 } 00:32:20.945 }, 00:32:20.945 { 00:32:20.945 "method": "sock_impl_set_options", 00:32:20.945 "params": { 00:32:20.945 "impl_name": "ssl", 00:32:20.945 "recv_buf_size": 4096, 00:32:20.945 "send_buf_size": 4096, 00:32:20.945 "enable_recv_pipe": true, 00:32:20.945 "enable_quickack": false, 00:32:20.945 "enable_placement_id": 0, 00:32:20.945 "enable_zerocopy_send_server": true, 00:32:20.945 "enable_zerocopy_send_client": false, 00:32:20.945 "zerocopy_threshold": 0, 00:32:20.945 "tls_version": 0, 00:32:20.945 "enable_ktls": false 00:32:20.945 } 00:32:20.945 }, 00:32:20.945 { 00:32:20.945 "method": "sock_impl_set_options", 00:32:20.945 "params": { 00:32:20.945 "impl_name": "posix", 00:32:20.945 "recv_buf_size": 2097152, 00:32:20.945 "send_buf_size": 2097152, 00:32:20.945 "enable_recv_pipe": true, 00:32:20.945 "enable_quickack": false, 00:32:20.945 "enable_placement_id": 0, 00:32:20.945 "enable_zerocopy_send_server": true, 00:32:20.945 "enable_zerocopy_send_client": false, 00:32:20.945 "zerocopy_threshold": 0, 00:32:20.945 "tls_version": 0, 00:32:20.945 "enable_ktls": false 00:32:20.945 } 00:32:20.945 } 00:32:20.945 ] 00:32:20.945 }, 00:32:20.945 { 00:32:20.945 "subsystem": "vmd", 00:32:20.945 "config": [] 00:32:20.945 }, 00:32:20.945 { 00:32:20.945 "subsystem": "accel", 00:32:20.945 "config": [ 00:32:20.945 { 00:32:20.945 "method": "accel_set_options", 00:32:20.945 "params": { 00:32:20.945 "small_cache_size": 128, 00:32:20.945 "large_cache_size": 16, 00:32:20.945 "task_count": 2048, 00:32:20.945 "sequence_count": 2048, 00:32:20.945 "buf_count": 2048 00:32:20.945 } 00:32:20.945 } 00:32:20.945 ] 00:32:20.945 }, 00:32:20.945 { 00:32:20.945 "subsystem": "bdev", 00:32:20.945 "config": [ 00:32:20.945 { 00:32:20.945 "method": "bdev_set_options", 00:32:20.945 "params": { 00:32:20.945 "bdev_io_pool_size": 65535, 00:32:20.945 "bdev_io_cache_size": 256, 00:32:20.945 "bdev_auto_examine": true, 00:32:20.945 "iobuf_small_cache_size": 128, 00:32:20.945 "iobuf_large_cache_size": 16 00:32:20.945 } 00:32:20.945 }, 00:32:20.945 { 00:32:20.945 "method": "bdev_raid_set_options", 00:32:20.945 "params": { 00:32:20.945 "process_window_size_kb": 1024, 00:32:20.945 "process_max_bandwidth_mb_sec": 0 00:32:20.945 } 00:32:20.945 }, 00:32:20.945 { 00:32:20.945 "method": "bdev_iscsi_set_options", 00:32:20.945 "params": { 00:32:20.945 "timeout_sec": 30 00:32:20.945 } 00:32:20.945 
}, 00:32:20.945 { 00:32:20.945 "method": "bdev_nvme_set_options", 00:32:20.945 "params": { 00:32:20.945 "action_on_timeout": "none", 00:32:20.945 "timeout_us": 0, 00:32:20.945 "timeout_admin_us": 0, 00:32:20.945 "keep_alive_timeout_ms": 10000, 00:32:20.945 "arbitration_burst": 0, 00:32:20.945 "low_priority_weight": 0, 00:32:20.945 "medium_priority_weight": 0, 00:32:20.945 "high_priority_weight": 0, 00:32:20.945 "nvme_adminq_poll_period_us": 10000, 00:32:20.945 "nvme_ioq_poll_period_us": 0, 00:32:20.945 "io_queue_requests": 512, 00:32:20.945 "delay_cmd_submit": true, 00:32:20.945 "transport_retry_count": 4, 00:32:20.945 "bdev_retry_count": 3, 00:32:20.945 "transport_ack_timeout": 0, 00:32:20.945 "ctrlr_loss_timeout_sec": 0, 00:32:20.945 "reconnect_delay_sec": 0, 00:32:20.945 "fast_io_fail_timeout_sec": 0, 00:32:20.945 "disable_auto_failback": false, 00:32:20.945 "generate_uuids": false, 00:32:20.945 "transport_tos": 0, 00:32:20.945 "nvme_error_stat": false, 00:32:20.945 "rdma_srq_size": 0, 00:32:20.945 "io_path_stat": false, 00:32:20.945 "allow_accel_sequence": false, 00:32:20.945 "rdma_max_cq_size": 0, 00:32:20.945 "rdma_cm_event_timeout_ms": 0, 00:32:20.945 "dhchap_digests": [ 00:32:20.945 "sha256", 00:32:20.945 "sha384", 00:32:20.945 "sha512" 00:32:20.945 ], 00:32:20.945 "dhchap_dhgroups": [ 00:32:20.945 "null", 00:32:20.945 "ffdhe2048", 00:32:20.945 "ffdhe3072", 00:32:20.946 "ffdhe4096", 00:32:20.946 "ffdhe6144", 00:32:20.946 "ffdhe8192" 00:32:20.946 ] 00:32:20.946 } 00:32:20.946 }, 00:32:20.946 { 00:32:20.946 "method": "bdev_nvme_attach_controller", 00:32:20.946 "params": { 00:32:20.946 "name": "nvme0", 00:32:20.946 "trtype": "TCP", 00:32:20.946 "adrfam": "IPv4", 00:32:20.946 "traddr": "127.0.0.1", 00:32:20.946 "trsvcid": "4420", 00:32:20.946 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:20.946 "prchk_reftag": false, 00:32:20.946 "prchk_guard": false, 00:32:20.946 "ctrlr_loss_timeout_sec": 0, 00:32:20.946 "reconnect_delay_sec": 0, 00:32:20.946 "fast_io_fail_timeout_sec": 0, 00:32:20.946 "psk": "key0", 00:32:20.946 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:20.946 "hdgst": false, 00:32:20.946 "ddgst": false 00:32:20.946 } 00:32:20.946 }, 00:32:20.946 { 00:32:20.946 "method": "bdev_nvme_set_hotplug", 00:32:20.946 "params": { 00:32:20.946 "period_us": 100000, 00:32:20.946 "enable": false 00:32:20.946 } 00:32:20.946 }, 00:32:20.946 { 00:32:20.946 "method": "bdev_wait_for_examine" 00:32:20.946 } 00:32:20.946 ] 00:32:20.946 }, 00:32:20.946 { 00:32:20.946 "subsystem": "nbd", 00:32:20.946 "config": [] 00:32:20.946 } 00:32:20.946 ] 00:32:20.946 }' 00:32:20.946 11:40:16 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:20.946 11:40:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:21.238 [2024-07-26 11:40:16.653312] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
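
Once the replayed bdevperf is up, the checks that follow confirm the keyring state survived the round trip: keyring_get_keys reports both keys, key0 is held by the keyring and by the re-attached controller's TLS session (refcnt 2), and key1 is merely registered (refcnt 1). The keyring/common.sh helpers doing the work, reconstructed from the jq filters visible in the trace, with the rpc.py path shortened:

bperf_cmd()  { scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
get_key()    { bperf_cmd keyring_get_keys | jq ".[] | select(.name == \"$1\")"; }
get_refcnt() { get_key "$1" | jq -r .refcnt; }

# The assertions below, in shorthand:
(( $(bperf_cmd keyring_get_keys | jq length) == 2 ))
(( $(get_refcnt key0) == 2 ))
(( $(get_refcnt key1) == 1 ))
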
00:32:21.238 [2024-07-26 11:40:16.653411] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2264720 ] 00:32:21.238 EAL: No free 2048 kB hugepages reported on node 1 00:32:21.238 [2024-07-26 11:40:16.721379] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:21.238 [2024-07-26 11:40:16.843086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:21.496 [2024-07-26 11:40:17.039917] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:22.430 11:40:17 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:22.430 11:40:17 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:32:22.430 11:40:17 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:32:22.430 11:40:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:22.430 11:40:17 keyring_file -- keyring/file.sh@120 -- # jq length 00:32:22.687 11:40:18 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:32:22.687 11:40:18 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:32:22.687 11:40:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:22.687 11:40:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:22.687 11:40:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:22.687 11:40:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:22.687 11:40:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:22.946 11:40:18 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:32:22.946 11:40:18 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:32:22.946 11:40:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:22.946 11:40:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:22.946 11:40:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:22.946 11:40:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:22.946 11:40:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:23.204 11:40:18 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:32:23.204 11:40:18 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:32:23.204 11:40:18 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:32:23.204 11:40:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:32:23.461 11:40:19 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:32:23.461 11:40:19 keyring_file -- keyring/file.sh@1 -- # cleanup 00:32:23.461 11:40:19 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.WhuYkea1PG /tmp/tmp.M1DLHAYnqe 00:32:23.461 11:40:19 keyring_file -- keyring/file.sh@20 -- # killprocess 2264720 00:32:23.461 11:40:19 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2264720 ']' 00:32:23.461 11:40:19 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2264720 00:32:23.461 11:40:19 keyring_file -- 
common/autotest_common.sh@955 -- # uname 00:32:23.461 11:40:19 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:23.461 11:40:19 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2264720 00:32:23.461 11:40:19 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:23.461 11:40:19 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:23.461 11:40:19 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2264720' 00:32:23.461 killing process with pid 2264720 00:32:23.461 11:40:19 keyring_file -- common/autotest_common.sh@969 -- # kill 2264720 00:32:23.461 Received shutdown signal, test time was about 1.000000 seconds 00:32:23.461 00:32:23.461 Latency(us) 00:32:23.461 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:23.461 =================================================================================================================== 00:32:23.461 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:23.461 11:40:19 keyring_file -- common/autotest_common.sh@974 -- # wait 2264720 00:32:24.027 11:40:19 keyring_file -- keyring/file.sh@21 -- # killprocess 2262813 00:32:24.027 11:40:19 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2262813 ']' 00:32:24.027 11:40:19 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2262813 00:32:24.027 11:40:19 keyring_file -- common/autotest_common.sh@955 -- # uname 00:32:24.027 11:40:19 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:24.027 11:40:19 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2262813 00:32:24.027 11:40:19 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:24.027 11:40:19 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:24.027 11:40:19 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2262813' 00:32:24.027 killing process with pid 2262813 00:32:24.028 11:40:19 keyring_file -- common/autotest_common.sh@969 -- # kill 2262813 00:32:24.028 [2024-07-26 11:40:19.450782] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:24.028 11:40:19 keyring_file -- common/autotest_common.sh@974 -- # wait 2262813 00:32:24.596 00:32:24.596 real 0m19.338s 00:32:24.596 user 0m49.670s 00:32:24.596 sys 0m4.141s 00:32:24.596 11:40:19 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:24.596 11:40:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:24.596 ************************************ 00:32:24.596 END TEST keyring_file 00:32:24.596 ************************************ 00:32:24.596 11:40:19 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:32:24.596 11:40:19 -- spdk/autotest.sh@301 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:24.596 11:40:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:24.596 11:40:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:24.596 11:40:19 -- common/autotest_common.sh@10 -- # set +x 00:32:24.596 ************************************ 00:32:24.596 START TEST keyring_linux 00:32:24.596 ************************************ 00:32:24.596 11:40:20 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:24.596 * Looking for test 
storage... 00:32:24.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:24.596 11:40:20 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:24.596 11:40:20 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:24.596 11:40:20 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:32:24.596 11:40:20 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:24.596 11:40:20 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:24.596 11:40:20 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:24.596 11:40:20 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:24.596 11:40:20 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:24.596 11:40:20 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:24.596 11:40:20 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:24.596 11:40:20 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:24.596 11:40:20 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:24.596 11:40:20 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:24.596 11:40:20 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:32:24.596 11:40:20 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:32:24.596 11:40:20 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:24.596 11:40:20 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:24.596 11:40:20 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:24.596 11:40:20 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:24.596 11:40:20 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:24.596 11:40:20 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:24.596 11:40:20 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:24.596 11:40:20 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:24.596 11:40:20 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.596 11:40:20 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.596 11:40:20 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.596 11:40:20 keyring_linux -- paths/export.sh@5 -- # export PATH 00:32:24.596 11:40:20 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.596 11:40:20 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:32:24.596 11:40:20 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:24.596 11:40:20 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:24.596 11:40:20 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:24.596 11:40:20 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:24.596 11:40:20 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:24.596 11:40:20 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:24.596 11:40:20 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:24.596 11:40:20 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:24.596 11:40:20 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:24.596 11:40:20 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:24.596 11:40:20 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:24.596 11:40:20 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:32:24.596 11:40:20 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:32:24.596 11:40:20 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:32:24.596 11:40:20 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:32:24.596 11:40:20 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:24.596 11:40:20 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:32:24.596 11:40:20 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:24.596 11:40:20 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:24.596 11:40:20 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:32:24.596 11:40:20 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:24.596 11:40:20 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:24.596 11:40:20 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:24.596 11:40:20 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:24.596 11:40:20 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:24.596 11:40:20 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:24.596 11:40:20 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:24.596 11:40:20 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:32:24.596 11:40:20 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:32:24.596 /tmp/:spdk-test:key0 00:32:24.596 11:40:20 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:32:24.596 11:40:20 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:24.596 11:40:20 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:32:24.596 11:40:20 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:24.596 11:40:20 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:24.596 11:40:20 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:32:24.596 11:40:20 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:24.596 11:40:20 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:24.596 11:40:20 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:24.596 11:40:20 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:24.596 11:40:20 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:24.596 11:40:20 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:24.596 11:40:20 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:24.596 11:40:20 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:32:24.596 11:40:20 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:32:24.596 /tmp/:spdk-test:key1 00:32:24.596 11:40:20 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2265298 00:32:24.596 11:40:20 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:24.596 11:40:20 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2265298 00:32:24.596 11:40:20 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 2265298 ']' 00:32:24.596 11:40:20 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:24.596 11:40:20 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:24.596 11:40:20 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:24.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:24.596 11:40:20 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:24.596 11:40:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:24.855 [2024-07-26 11:40:20.295620] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
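[Editor's note] The prep_key trace above builds each TLS PSK in the NVMe/TCP interchange format (NVMeTLSkey-1:<hash>:<base64>:) by piping the configured hex key through an inline `python -` helper, then writes the result to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 and restricts the files to mode 0600. A minimal Python sketch of that formatting step follows; the little-endian CRC32 trailer reflects what SPDK's format_key helper is understood to append before base64-encoding, and should be read as an assumption rather than a quote from this log.

import base64
import zlib

def format_interchange_psk(key: str, hash_id: int = 0) -> str:
    # Sketch only: append a little-endian CRC32 of the configured key
    # (assumed behaviour of SPDK's format_key), then base64-encode and
    # wrap the payload in the NVMeTLSkey-1 interchange framing.
    crc = zlib.crc32(key.encode()).to_bytes(4, byteorder="little")
    b64 = base64.b64encode(key.encode() + crc).decode()
    return "NVMeTLSkey-1:{:02x}:{}:".format(hash_id, b64)

# Shape matches the key written to /tmp/:spdk-test:key0 above, e.g.
# NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
print(format_interchange_psk("00112233445566778899aabbccddeeff"))

The embedded checksum lets a consumer of the interchange string detect corrupted key material before it is handed to the TLS layer. [End note]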
00:32:24.855 [2024-07-26 11:40:20.295737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2265298 ] 00:32:24.855 EAL: No free 2048 kB hugepages reported on node 1 00:32:24.855 [2024-07-26 11:40:20.370192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:24.855 [2024-07-26 11:40:20.494829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:25.113 11:40:20 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:25.113 11:40:20 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:32:25.113 11:40:20 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:32:25.113 11:40:20 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.113 11:40:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:25.372 [2024-07-26 11:40:20.776299] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:25.372 null0 00:32:25.372 [2024-07-26 11:40:20.808353] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:25.372 [2024-07-26 11:40:20.808904] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:25.372 11:40:20 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.372 11:40:20 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:32:25.372 541550184 00:32:25.372 11:40:20 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:32:25.372 983912903 00:32:25.372 11:40:20 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2265309 00:32:25.372 11:40:20 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:32:25.372 11:40:20 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2265309 /var/tmp/bperf.sock 00:32:25.372 11:40:20 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 2265309 ']' 00:32:25.372 11:40:20 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:25.372 11:40:20 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:25.372 11:40:20 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:25.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:25.372 11:40:20 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:25.372 11:40:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:25.372 [2024-07-26 11:40:20.877103] Starting SPDK v24.09-pre git sha1 064b11df7 / DPDK 24.03.0 initialization... 
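[Editor's note] The two serials printed above (541550184 and 983912903) come from `keyctl add user <name> <psk> @s`, which stores each PSK in the caller's session keyring; the test's get_keysn later resolves them again with `keyctl search @s user <name>`, and bdevperf is handed only the key name via `bdev_nvme_attach_controller --psk :spdk-test:key0`, as the RPC trace below shows. A rough, self-contained Python sketch of that flow, reusing the rpc.py flags and the /var/tmp/bperf.sock socket exactly as they appear in this trace (an illustration, not the test script itself):

import subprocess

RPC_PY = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
BPERF_SOCK = "/var/tmp/bperf.sock"

def keyring_add(name: str, psk: str) -> int:
    # `keyctl add` prints the serial of the new session-keyring entry
    # (541550184 for :spdk-test:key0 in the trace above).
    out = subprocess.check_output(
        ["keyctl", "add", "user", name, psk, "@s"], text=True)
    return int(out.strip())

def keyring_search(name: str) -> int:
    # Mirrors the test's get_keysn: `keyctl search @s user <name>`.
    out = subprocess.check_output(
        ["keyctl", "search", "@s", "user", name], text=True)
    return int(out.strip())

def attach_controller_with_psk(key_name: str) -> None:
    # Same flags the trace passes further down; the bdev layer resolves
    # the PSK from the kernel keyring by name at connect time.
    subprocess.check_call([
        RPC_PY, "-s", BPERF_SOCK, "bdev_nvme_attach_controller",
        "-b", "nvme0", "-t", "tcp", "-a", "127.0.0.1", "-s", "4420",
        "-f", "ipv4", "-n", "nqn.2016-06.io.spdk:cnode0",
        "-q", "nqn.2016-06.io.spdk:host0", "--psk", key_name,
    ])

Passing the key by name rather than by value is the point of the keyring test: the secret never appears on the RPC command line. Teardown is the inverse, visible in the cleanup below as `keyctl unlink <serial>` followed by "1 links removed". [End note]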
00:32:25.372 [2024-07-26 11:40:20.877179] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2265309 ] 00:32:25.372 EAL: No free 2048 kB hugepages reported on node 1 00:32:25.372 [2024-07-26 11:40:20.944416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:25.631 [2024-07-26 11:40:21.067624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:25.631 11:40:21 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:25.631 11:40:21 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:32:25.631 11:40:21 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:32:25.631 11:40:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:32:25.888 11:40:21 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:32:25.888 11:40:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:26.455 11:40:21 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:26.455 11:40:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:26.712 [2024-07-26 11:40:22.227474] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:26.712 nvme0n1 00:32:26.712 11:40:22 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:32:26.712 11:40:22 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:32:26.712 11:40:22 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:26.712 11:40:22 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:26.712 11:40:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:26.712 11:40:22 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:26.970 11:40:22 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:32:26.970 11:40:22 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:26.970 11:40:22 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:32:26.971 11:40:22 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:32:26.971 11:40:22 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:26.971 11:40:22 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:32:26.971 11:40:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:27.536 11:40:22 keyring_linux -- keyring/linux.sh@25 -- # sn=541550184 00:32:27.536 11:40:22 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:32:27.536 11:40:22 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0
00:32:27.536 11:40:22 keyring_linux -- keyring/linux.sh@26 -- # [[ 541550184 == \5\4\1\5\5\0\1\8\4 ]]
00:32:27.536 11:40:22 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 541550184
00:32:27.536 11:40:22 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]]
00:32:27.536 11:40:22 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:32:27.536 Running I/O for 1 seconds...
00:32:28.470
00:32:28.470 Latency(us)
00:32:28.470 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:28.470 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:32:28.470 nvme0n1 : 1.02 4956.56 19.36 0.00 0.00 25603.41 11747.93 37088.52
00:32:28.471 ===================================================================================================================
00:32:28.471 Total : 4956.56 19.36 0.00 0.00 25603.41 11747.93 37088.52
00:32:28.471 0
00:32:28.471 11:40:24 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:32:28.471 11:40:24 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:32:29.037 11:40:24 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0
00:32:29.037 11:40:24 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name=
00:32:29.037 11:40:24 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:32:29.037 11:40:24 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:32:29.037 11:40:24 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:32:29.037 11:40:24 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:32:29.603 11:40:25 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count ))
00:32:29.603 11:40:25 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:32:29.603 11:40:25 keyring_linux -- keyring/linux.sh@23 -- # return
00:32:29.603 11:40:25 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:32:29.603 11:40:25 keyring_linux -- common/autotest_common.sh@650 -- # local es=0
00:32:29.603 11:40:25 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:32:29.603 11:40:25 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd
00:32:29.603 11:40:25 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:32:29.603 11:40:25 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd
00:32:29.603 11:40:25 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:32:29.603 11:40:25 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:32:29.603 11:40:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:32:29.861 [2024-07-26 11:40:25.504978] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:32:29.861 [2024-07-26 11:40:25.505387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2016fe0 (107): Transport endpoint is not connected
00:32:29.861 [2024-07-26 11:40:25.506378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2016fe0 (9): Bad file descriptor
00:32:29.861 [2024-07-26 11:40:25.507376] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:32:29.861 [2024-07-26 11:40:25.507399] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:32:29.861 [2024-07-26 11:40:25.507415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:32:29.861 request:
00:32:29.861 {
00:32:29.861 "name": "nvme0",
00:32:29.861 "trtype": "tcp",
00:32:29.861 "traddr": "127.0.0.1",
00:32:29.861 "adrfam": "ipv4",
00:32:29.861 "trsvcid": "4420",
00:32:29.861 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:32:29.861 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:32:29.861 "prchk_reftag": false,
00:32:29.861 "prchk_guard": false,
00:32:29.861 "hdgst": false,
00:32:29.861 "ddgst": false,
00:32:29.861 "psk": ":spdk-test:key1",
00:32:29.861 "method": "bdev_nvme_attach_controller",
00:32:29.861 "req_id": 1
00:32:29.861 }
00:32:29.861 Got JSON-RPC error response
00:32:29.861 response:
00:32:29.861 {
00:32:29.861 "code": -5,
00:32:29.861 "message": "Input/output error"
00:32:29.861 }
00:32:30.119 11:40:25 keyring_linux -- common/autotest_common.sh@653 -- # es=1
00:32:30.119 11:40:25 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:32:30.119 11:40:25 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:32:30.119 11:40:25 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:32:30.119 11:40:25 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:32:30.119 11:40:25 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:32:30.119 11:40:25 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:32:30.119 11:40:25 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:32:30.119 11:40:25 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:32:30.119 11:40:25 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:32:30.119 11:40:25 keyring_linux -- keyring/linux.sh@33 -- # sn=541550184
00:32:30.120 11:40:25 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 541550184
00:32:30.120 1 links removed
00:32:30.120 11:40:25 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:32:30.120 11:40:25 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:32:30.120 11:40:25 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:32:30.120 11:40:25 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:32:30.120 11:40:25 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:32:30.120 11:40:25 keyring_linux -- keyring/linux.sh@33 -- # sn=983912903 00:32:30.120
11:40:25 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 983912903 00:32:30.120 1 links removed 00:32:30.120 11:40:25 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2265309 00:32:30.120 11:40:25 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 2265309 ']' 00:32:30.120 11:40:25 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 2265309 00:32:30.120 11:40:25 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:32:30.120 11:40:25 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:30.120 11:40:25 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2265309 00:32:30.120 11:40:25 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:30.120 11:40:25 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:30.120 11:40:25 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2265309' 00:32:30.120 killing process with pid 2265309 00:32:30.120 11:40:25 keyring_linux -- common/autotest_common.sh@969 -- # kill 2265309 00:32:30.120 Received shutdown signal, test time was about 1.000000 seconds 00:32:30.120 00:32:30.120 Latency(us) 00:32:30.120 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:30.120 =================================================================================================================== 00:32:30.120 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:30.120 11:40:25 keyring_linux -- common/autotest_common.sh@974 -- # wait 2265309 00:32:30.378 11:40:25 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2265298 00:32:30.378 11:40:25 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 2265298 ']' 00:32:30.378 11:40:25 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 2265298 00:32:30.378 11:40:25 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:32:30.378 11:40:25 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:30.378 11:40:25 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2265298 00:32:30.378 11:40:25 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:30.378 11:40:25 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:30.378 11:40:25 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2265298' 00:32:30.378 killing process with pid 2265298 00:32:30.378 11:40:25 keyring_linux -- common/autotest_common.sh@969 -- # kill 2265298 00:32:30.378 11:40:25 keyring_linux -- common/autotest_common.sh@974 -- # wait 2265298 00:32:30.946 00:32:30.946 real 0m6.341s 00:32:30.946 user 0m12.854s 00:32:30.946 sys 0m1.776s 00:32:30.946 11:40:26 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:30.946 11:40:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:30.946 ************************************ 00:32:30.946 END TEST keyring_linux 00:32:30.946 ************************************ 00:32:30.946 11:40:26 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:32:30.946 11:40:26 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:32:30.946 11:40:26 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:32:30.946 11:40:26 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:32:30.946 11:40:26 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:32:30.946 11:40:26 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:32:30.946 11:40:26 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:32:30.946 11:40:26 -- 
spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:32:30.946 11:40:26 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:32:30.946 11:40:26 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:32:30.946 11:40:26 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:32:30.946 11:40:26 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:32:30.946 11:40:26 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:32:30.946 11:40:26 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:32:30.946 11:40:26 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:32:30.946 11:40:26 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:32:30.946 11:40:26 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:32:30.946 11:40:26 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:30.946 11:40:26 -- common/autotest_common.sh@10 -- # set +x 00:32:30.946 11:40:26 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:32:30.946 11:40:26 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:32:30.946 11:40:26 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:32:30.946 11:40:26 -- common/autotest_common.sh@10 -- # set +x 00:32:33.479 INFO: APP EXITING 00:32:33.479 INFO: killing all VMs 00:32:33.479 INFO: killing vhost app 00:32:33.479 INFO: EXIT DONE 00:32:34.857 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:32:34.857 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:32:34.857 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:32:34.857 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:32:34.857 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:32:34.857 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:32:34.857 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:32:34.857 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:32:34.857 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:32:34.857 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:32:34.857 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:32:34.857 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:32:34.857 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:32:34.857 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:32:34.857 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:32:34.857 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:32:34.857 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:32:36.763 Cleaning 00:32:36.763 Removing: /var/run/dpdk/spdk0/config 00:32:36.763 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:36.763 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:36.763 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:36.763 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:36.763 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:32:36.763 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:32:36.763 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:32:36.763 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:32:36.763 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:36.763 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:36.763 Removing: /var/run/dpdk/spdk1/config 00:32:36.763 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:36.763 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:36.763 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:36.763 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:36.763 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:32:36.763 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:32:36.763 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:32:36.763 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:32:36.763 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:36.763 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:36.763 Removing: /var/run/dpdk/spdk1/mp_socket 00:32:36.763 Removing: /var/run/dpdk/spdk2/config 00:32:36.763 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:36.763 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:36.763 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:36.763 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:36.763 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:32:36.763 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:32:36.763 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:32:36.763 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:32:36.763 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:36.763 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:36.763 Removing: /var/run/dpdk/spdk3/config 00:32:36.763 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:36.763 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:36.763 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:32:36.763 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:32:36.763 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:32:36.763 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:32:36.763 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:32:36.763 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:32:36.763 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:32:36.763 Removing: /var/run/dpdk/spdk3/hugepage_info 00:32:36.763 Removing: /var/run/dpdk/spdk4/config 00:32:36.763 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:32:36.763 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:32:36.763 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:32:36.763 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:32:36.763 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:32:36.763 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:32:36.763 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:32:36.763 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:32:36.763 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:32:36.763 Removing: /var/run/dpdk/spdk4/hugepage_info 00:32:36.763 Removing: /dev/shm/bdev_svc_trace.1 00:32:36.763 Removing: /dev/shm/nvmf_trace.0 00:32:36.763 Removing: /dev/shm/spdk_tgt_trace.pid1991227 00:32:36.763 Removing: /var/run/dpdk/spdk0 00:32:36.763 Removing: /var/run/dpdk/spdk1 00:32:36.763 Removing: /var/run/dpdk/spdk2 00:32:36.764 Removing: /var/run/dpdk/spdk3 00:32:36.764 Removing: /var/run/dpdk/spdk4 00:32:36.764 Removing: /var/run/dpdk/spdk_pid1989561 00:32:36.764 Removing: /var/run/dpdk/spdk_pid1990291 00:32:36.764 Removing: /var/run/dpdk/spdk_pid1991227 00:32:36.764 Removing: /var/run/dpdk/spdk_pid1991672 00:32:36.764 Removing: /var/run/dpdk/spdk_pid1992358 00:32:36.764 Removing: /var/run/dpdk/spdk_pid1992498 00:32:36.764 Removing: /var/run/dpdk/spdk_pid1993214 00:32:36.764 Removing: /var/run/dpdk/spdk_pid1993278 00:32:36.764 Removing: /var/run/dpdk/spdk_pid1993552 00:32:36.764 Removing: /var/run/dpdk/spdk_pid1994924 00:32:36.764 Removing: /var/run/dpdk/spdk_pid1995890 00:32:36.764 Removing: /var/run/dpdk/spdk_pid1996275 
00:32:36.764 Removing: /var/run/dpdk/spdk_pid1996473 00:32:36.764 Removing: /var/run/dpdk/spdk_pid1996796 00:32:36.764 Removing: /var/run/dpdk/spdk_pid1996986 00:32:36.764 Removing: /var/run/dpdk/spdk_pid1997150 00:32:36.764 Removing: /var/run/dpdk/spdk_pid1997309 00:32:36.764 Removing: /var/run/dpdk/spdk_pid1997511 00:32:36.764 Removing: /var/run/dpdk/spdk_pid1998065 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2001580 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2001849 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2002021 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2002095 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2002458 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2002587 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2003024 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2003147 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2003437 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2003467 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2003635 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2003766 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2004257 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2004413 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2004611 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2006836 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2009651 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2016845 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2017368 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2019980 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2020186 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2023093 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2026958 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2029281 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2036727 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2042104 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2043419 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2044088 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2055013 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2057302 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2084513 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2087821 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2091849 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2096053 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2096056 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2096715 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2097267 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2097907 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2098306 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2098315 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2098574 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2098714 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2098716 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2099368 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2099949 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2100563 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2100960 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2100968 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2101224 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2102240 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2103047 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2108972 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2140440 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2143629 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2144809 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2146075 00:32:36.764 Removing: /var/run/dpdk/spdk_pid2146268 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2146402 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2146547 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2146990 
00:32:37.022 Removing: /var/run/dpdk/spdk_pid2148377 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2149294 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2149716 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2151587 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2152115 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2153194 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2155752 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2161823 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2164460 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2168362 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2169319 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2170415 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2173135 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2175506 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2179879 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2179881 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2182921 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2183108 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2183306 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2183572 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2183581 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2186357 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2186806 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2189623 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2192094 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2195772 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2199509 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2206915 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2211407 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2211412 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2225338 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2225866 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2226413 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2226946 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2227522 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2227946 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2228466 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2228881 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2231522 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2231669 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2235594 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2235654 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2237376 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2242417 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2242429 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2245356 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2246870 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2248267 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2249006 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2250418 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2251401 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2257368 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2257732 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2258119 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2259683 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2260081 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2260372 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2262813 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2262921 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2264720 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2265298 00:32:37.022 Removing: /var/run/dpdk/spdk_pid2265309 00:32:37.022 Clean 00:32:37.281 11:40:32 -- common/autotest_common.sh@1451 -- # return 0 00:32:37.281 11:40:32 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:32:37.281 11:40:32 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:37.281 11:40:32 -- 
common/autotest_common.sh@10 -- # set +x 00:32:37.281 11:40:32 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:32:37.281 11:40:32 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:37.281 11:40:32 -- common/autotest_common.sh@10 -- # set +x 00:32:37.281 11:40:32 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:32:37.281 11:40:32 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:32:37.281 11:40:32 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:32:37.281 11:40:32 -- spdk/autotest.sh@395 -- # hash lcov 00:32:37.281 11:40:32 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:32:37.281 11:40:32 -- spdk/autotest.sh@397 -- # hostname 00:32:37.281 11:40:32 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:32:37.539 geninfo: WARNING: invalid characters removed from testname! 00:33:59.000 11:41:43 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:59.259 11:41:54 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:07.378 11:42:02 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:15.531 11:42:09 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:22.098 11:42:17 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:30.215 11:42:25 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
--rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:38.329 11:42:33 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:38.329 11:42:33 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:38.330 11:42:33 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:34:38.330 11:42:33 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:38.330 11:42:33 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:38.330 11:42:33 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.330 11:42:33 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.330 11:42:33 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.330 11:42:33 -- paths/export.sh@5 -- $ export PATH 00:34:38.330 11:42:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.330 11:42:33 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:34:38.330 11:42:33 -- common/autobuild_common.sh@447 -- $ date +%s 00:34:38.330 11:42:33 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721986953.XXXXXX 00:34:38.330 11:42:33 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721986953.DcozZk 00:34:38.330 11:42:33 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:34:38.330 11:42:33 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:34:38.330 11:42:33 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:34:38.330 11:42:33 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:34:38.330 11:42:33 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:34:38.330 11:42:33 -- common/autobuild_common.sh@463 -- $ get_config_params 00:34:38.330 11:42:33 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:34:38.330 11:42:33 -- common/autotest_common.sh@10 -- $ set +x 00:34:38.330 11:42:33 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:34:38.330 11:42:33 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:34:38.330 11:42:33 -- pm/common@17 -- $ local monitor 00:34:38.330 11:42:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:38.330 11:42:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:38.330 11:42:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:38.330 11:42:33 -- pm/common@21 -- $ date +%s 00:34:38.330 11:42:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:38.330 11:42:33 -- pm/common@21 -- $ date +%s 00:34:38.330 11:42:33 -- pm/common@25 -- $ sleep 1 00:34:38.330 11:42:33 -- pm/common@21 -- $ date +%s 00:34:38.330 11:42:33 -- pm/common@21 -- $ date +%s 00:34:38.330 11:42:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721986953 00:34:38.330 11:42:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721986953 00:34:38.330 11:42:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721986953 00:34:38.330 11:42:33 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721986953 00:34:38.330 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721986953_collect-vmstat.pm.log 00:34:38.330 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721986953_collect-cpu-load.pm.log 00:34:38.330 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721986953_collect-cpu-temp.pm.log 00:34:38.330 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721986953_collect-bmc-pm.bmc.pm.log 00:34:38.898 11:42:34 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:34:38.898 11:42:34 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:34:38.898 11:42:34 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:38.898 11:42:34 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:34:38.898 11:42:34 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:34:38.898 11:42:34 -- spdk/autopackage.sh@19 -- $ timing_finish 00:34:38.898 11:42:34 -- common/autotest_common.sh@736 -- $ 
flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:38.898 11:42:34 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:34:38.898 11:42:34 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:38.898 11:42:34 -- spdk/autopackage.sh@20 -- $ exit 0 00:34:38.898 11:42:34 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:34:38.899 11:42:34 -- pm/common@29 -- $ signal_monitor_resources TERM 00:34:38.899 11:42:34 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:34:38.899 11:42:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:38.899 11:42:34 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:34:38.899 11:42:34 -- pm/common@44 -- $ pid=2276294 00:34:38.899 11:42:34 -- pm/common@50 -- $ kill -TERM 2276294 00:34:38.899 11:42:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:38.899 11:42:34 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:34:38.899 11:42:34 -- pm/common@44 -- $ pid=2276296 00:34:38.899 11:42:34 -- pm/common@50 -- $ kill -TERM 2276296 00:34:38.899 11:42:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:38.899 11:42:34 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:34:38.899 11:42:34 -- pm/common@44 -- $ pid=2276298 00:34:38.899 11:42:34 -- pm/common@50 -- $ kill -TERM 2276298 00:34:38.899 11:42:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:38.899 11:42:34 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:34:38.899 11:42:34 -- pm/common@44 -- $ pid=2276328 00:34:38.899 11:42:34 -- pm/common@50 -- $ sudo -E kill -TERM 2276328 00:34:38.899 + [[ -n 1899501 ]] 00:34:38.899 + sudo kill 1899501 00:34:39.168 [Pipeline] } 00:34:39.186 [Pipeline] // stage 00:34:39.192 [Pipeline] } 00:34:39.229 [Pipeline] // timeout 00:34:39.235 [Pipeline] } 00:34:39.254 [Pipeline] // catchError 00:34:39.261 [Pipeline] } 00:34:39.275 [Pipeline] // wrap 00:34:39.282 [Pipeline] } 00:34:39.293 [Pipeline] // catchError 00:34:39.302 [Pipeline] stage 00:34:39.304 [Pipeline] { (Epilogue) 00:34:39.318 [Pipeline] catchError 00:34:39.321 [Pipeline] { 00:34:39.335 [Pipeline] echo 00:34:39.337 Cleanup processes 00:34:39.342 [Pipeline] sh 00:34:39.624 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:39.624 2276429 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:34:39.624 2276561 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:39.637 [Pipeline] sh 00:34:39.966 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:39.966 ++ grep -v 'sudo pgrep' 00:34:39.966 ++ awk '{print $1}' 00:34:39.966 + sudo kill -9 2276429 00:34:39.977 [Pipeline] sh 00:34:40.260 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:35:02.203 [Pipeline] sh 00:35:02.488 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:35:02.488 Artifacts sizes are good 00:35:02.503 [Pipeline] archiveArtifacts 00:35:02.510 Archiving artifacts 00:35:02.765 [Pipeline] sh 00:35:03.049 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 
00:35:03.064 [Pipeline] cleanWs 00:35:03.075 [WS-CLEANUP] Deleting project workspace... 00:35:03.075 [WS-CLEANUP] Deferred wipeout is used... 00:35:03.081 [WS-CLEANUP] done 00:35:03.084 [Pipeline] } 00:35:03.106 [Pipeline] // catchError 00:35:03.121 [Pipeline] sh 00:35:03.408 + logger -p user.info -t JENKINS-CI 00:35:03.418 [Pipeline] } 00:35:03.438 [Pipeline] // stage 00:35:03.443 [Pipeline] } 00:35:03.461 [Pipeline] // node 00:35:03.468 [Pipeline] End of Pipeline 00:35:03.503 Finished: SUCCESS